VIRTUAL REALITY LAPAROSCOPIC TOOLS
Patent abstract:
The present invention relates to a virtual reality system that provides a virtual robotic surgical environment, and to methods for using the virtual reality system. The virtual reality system can simulate a robotic surgical environment in which a user can operate not only a robotically controlled surgical instrument using a handheld controller, but also a manual laparoscopic surgical instrument while still adjacent to the patient's table. For example, the virtual reality system may include one or more processors configured to generate a virtual robotic surgical environment comprising at least one virtual robotic arm and at least one virtual manual laparoscopic tool; a first handheld device communicatively coupled to the virtual reality controller to manipulate the at least one virtual robotic arm in the virtual robotic surgical environment; and a second handheld device comprising a hand portion and a tool feature representative of at least a portion of a manual laparoscopic tool, wherein the second handheld device is communicatively coupled to the virtual reality controller to manipulate the at least one virtual manual laparoscopic tool in the virtual robotic surgical environment.
Publication number: BR112019025752A2
Application number: R112019025752-7
Filing date: 2018-06-27
Publication date: 2020-06-23
Inventors: Eric Mark Johnson; Pablo Eduardo Garcia Kilroy; Bernard Fai Kin Siu; Haoran Yu
Applicant: Verb Surgical Inc.
Patent description:
[0001] This application claims priority to US patent application No. 62/526,896, filed on June 29, 2017, the content of which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
[0002] The present invention relates in general to the field of robotic surgery and, more specifically, to new and useful systems and methods for providing virtual robotic surgical environments.
BACKGROUND OF THE INVENTION
[0003] Minimally invasive surgery (MIS), such as laparoscopic surgery, involves techniques intended to reduce tissue damage during a surgical procedure. For example, laparoscopic procedures typically involve creating a number of small incisions in the patient (for example, in the abdomen), and introducing one or more surgical instruments (for example, an end effector, at least one camera, etc.) through the incisions into the patient. Surgical procedures can then be performed using the introduced surgical instruments, with the visualization aid provided by the camera.
[0004] In general, MIS offers several benefits, such as reduced patient scarring, less patient pain, shorter patient recovery periods and lower medical treatment costs associated with patient recovery. In some embodiments, MIS can be performed with robotic systems that include one or more robotic arms to manipulate surgical instruments based on commands from an operator. A robotic arm can, for example, support various devices at its distal end, such as surgical end effectors, imaging devices, cannulas to provide access to the patient's body cavity and organs, etc.
[0005] Robotic surgical systems are generally complex systems that perform complex procedures. Thus, a user (for example, a surgeon) may in general need significant training and experience to successfully operate a robotic surgical system. Such training and experience are advantageous for effectively planning the specifics of MIS procedures (for example, determining the optimal number, location, and orientation of the robotic arms, determining the optimal number and location of the incisions, determining optimal types and sizes of surgical instruments, determining the order of actions in a procedure, etc.).
[0006] Additionally, the process of designing robotic surgical systems can also be complicated. For example, hardware improvements (for example, to robotic arms) are prototyped as physical embodiments and physically tested. Software improvements (for example, control algorithms for robotic arms) may also require physical embodiments. Such cyclic prototyping and testing is generally cumulatively expensive and time-consuming.
SUMMARY OF THE INVENTION
[0007] In general, a virtual reality system to provide a virtual robotic surgical environment may include a virtual reality processor (for example, a processor on a computer that implements instructions stored in memory) to generate a virtual robotic surgical environment, a head-mounted display that can be worn by a user, and one or more handheld controllers that can be manipulated by the user to interact with the virtual robotic surgical environment. The virtual reality processor can, in some variations, be configured to generate the virtual robotic surgical environment based on at least one predetermined configuration file that describes a virtual component (for example, a virtual robotic component) in the virtual environment.
The head-mounted display may include an immersive display for displaying the virtual robotic surgical environment to the user (for example, with a first-person perspective view of the virtual environment). In some variations, the virtual reality system may additionally or alternatively include an external display to display the virtual robotic surgical environment. The immersive display and the external display, if both are present, can be synchronized to show the same or similar content. The virtual reality system can be configured to generate a virtual robotic surgical environment within which a user can navigate around a virtual operating room and interact with virtual objects through the head-mounted display and/or handheld controllers. The virtual reality system (and variations thereof, as further described here) can function as a useful tool in relation to robotic surgery, in applications that include, but are not limited to, training, simulation, and/or collaboration between multiple people.
[0008] In some variations, a virtual reality system can interface with a real or actual (non-virtual) operating room. The virtual reality system may allow visualization of a robotic surgical environment, and may include a virtual reality processor configured to generate a virtual robotic surgical environment that comprises at least one virtual robotic component, and at least one sensor in a robotic surgical environment. The sensor can be in communication with the virtual reality processor and configured to detect a status of a robotic component that corresponds to the virtual robotic component. The virtual reality processor is configured to receive the detected status of the robotic component and modify the virtual robotic component based at least in part on the detected status, so that the virtual robotic component mimics the robotic component.
[0009] For example, a user can monitor an actual robotic surgical procedure in a real operating room through a virtual reality system that interfaces with the real operating room (for example, the user can interact with a virtual reality environment that is a reflection of conditions in the actual operating room). The detected positions of the robotic components during a surgical procedure can be compared with their expected positions as determined from surgical pre-planning in a virtual environment, so that deviations from the surgical plan can prompt a surgeon to make adjustments to avoid collisions (for example, changing the position of a robotic arm, etc.).
[0010] In some variations, the one or more sensors can be configured to detect characteristics or status of a robotic component such as position, orientation, speed, and/or velocity. As an illustrative example, the one or more sensors in the robotic surgical environment can be configured to detect the position and/or orientation of a robotic component such as a robotic arm. The position and orientation of the robotic arm can be fed to the virtual reality processor, which moves or otherwise modifies a virtual robotic arm that corresponds to the actual robotic arm. As such, a user who is viewing the virtual robotic surgical environment can view the adjusted virtual robotic arm. As another illustrative example, one or more sensors can be configured to detect a collision involving the robotic component in the robotic surgical environment, and the system can provide an alarm notifying the user that a collision has occurred.
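By way of illustration only, the C++ sketch below shows one way the status-mirroring behavior just described could be organized; the type and function names (ArmStatus, VirtualArm, mimic) are assumptions introduced here, not names from the system described above.

```cpp
#include <array>
#include <iostream>

// Hypothetical detected state of a real robotic arm, as reported by sensors
// in the actual operating room.
struct ArmStatus {
    std::array<double, 7> jointAngles;  // one angle per joint, radians
    bool collisionDetected;             // set when a collision is sensed
};

// Hypothetical virtual arm mirror kept by the virtual reality processor.
class VirtualArm {
public:
    // Update the virtual arm so that it mimics the detected status.
    void mimic(const ArmStatus& status) {
        jointAngles_ = status.jointAngles;
        if (status.collisionDetected) {
            // A real system might raise a visual or audio alarm in the display.
            std::cout << "Collision alarm: notifying the user\n";
        }
    }
private:
    std::array<double, 7> jointAngles_{};
};

int main() {
    VirtualArm virtualArm;
    ArmStatus detected{{0.0, 0.5, -0.2, 1.1, 0.0, 0.3, 0.0}, false};
    virtualArm.mimic(detected);  // the virtual arm now reflects the real arm's pose
}
```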
[0011] Within the virtual reality system, several user modes allow different types of interactions between a user and the virtual robotic surgical environment. For example, a variation of a method to facilitate navigation in a virtual robotic surgical environment includes displaying a first-person perspective view of the virtual robotic surgical environment from a first vantage point within the virtual robotic surgical environment, displaying a first window view of the virtual robotic surgical environment from a second vantage point, and displaying a second window view of the virtual robotic surgical environment from a third vantage point. The first and second window views can be displayed in respective regions of the displayed first-person perspective view. In addition, the method may include, in response to a user input associating the first and second window views, sequentially linking the first and second window views to generate a trajectory between the second and third vantage points. Window views of the virtual robotic surgical environment can be displayed at different scale factors (for example, zoom levels), and can offer views of the virtual environment from any suitable vantage point in the virtual environment, such as inside a virtual patient, above the virtual patient, etc.
[0012] In response to a user input indicating selection of a particular window view, the method may include displaying a new first-person perspective view of the virtual environment from the vantage point of the selected window view. In other words, window views can, for example, operate as portals that facilitate transport between different vantage points within the virtual environment.
[0013] As another example of interaction between a user and the virtual robotic surgical environment, a variation of a method to facilitate visualization of a virtual robotic surgical environment includes displaying a first-person perspective view of the virtual robotic surgical environment from a first vantage point within the virtual robotic surgical environment, receiving a user input indicating placement of a virtual camera at a second vantage point within the virtual robotic surgical environment different from the first vantage point, generating a virtual camera perspective view of the virtual robotic surgical environment from the second vantage point, and displaying the virtual camera perspective view in a region of the displayed first-person perspective view. The camera view can, for example, provide a supplementary view of the virtual environment for the user that allows the user to monitor various aspects of the environment simultaneously while still maintaining primary focus on the main first-person perspective view. In some variations, the method may additionally include receiving a user input indicating selection of a type of virtual camera (for example, a movie camera configured to be placed outside a virtual patient, an endoscopic camera configured to be placed inside a virtual patient, a 360-degree camera, etc.) and displaying a virtual model of the selected virtual camera type at the second vantage point within the virtual robotic surgical environment. Other examples of user interactions with the virtual environment are described in this document.
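As a minimal sketch of the portal behavior described above, the following C++ shows one way window views could be linked into a trajectory and used for transport between vantage points; the names and the straight-line interpolation are illustrative assumptions, not the system's implementation.

```cpp
#include <vector>

// A vantage point in the virtual operating room (position only, for brevity).
struct VantagePoint { double x, y, z; double scaleFactor; };

// Hypothetical window view ("portal") rendered inside the first-person view.
struct Portal { VantagePoint viewpoint; };

// Linking two portals yields a trajectory between their vantage points,
// sketched here as a simple linear interpolation of sample points.
std::vector<VantagePoint> linkPortals(const Portal& a, const Portal& b, int samples) {
    if (samples < 1) samples = 1;
    std::vector<VantagePoint> trajectory;
    for (int i = 0; i <= samples; ++i) {
        double t = static_cast<double>(i) / samples;
        trajectory.push_back({a.viewpoint.x + t * (b.viewpoint.x - a.viewpoint.x),
                              a.viewpoint.y + t * (b.viewpoint.y - a.viewpoint.y),
                              a.viewpoint.z + t * (b.viewpoint.z - a.viewpoint.z),
                              1.0});
    }
    return trajectory;
}

// Selecting a portal "transports" the user: the first-person view is simply
// re-rendered from the selected portal's vantage point.
VantagePoint teleportTo(const Portal& selected) { return selected.viewpoint; }
```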
[0014] In another variation of a virtual reality system, the virtual reality system can simulate a robotic surgical environment in which a user can operate not only a robotically controlled surgical instrument using a handheld controller, but also a manual laparoscopic surgical instrument (for example, while still adjacent to the patient's table, or "over the bed"). For example, the virtual reality system may include one or more processors configured to generate a virtual robotic surgical environment comprising at least one virtual robotic arm and at least one virtual manual laparoscopic tool, a first handheld device communicatively coupled to the virtual reality processor to manipulate the at least one virtual robotic arm in the virtual robotic surgical environment, and a second handheld device comprising a hand portion and a tool feature representative of at least a portion of a manual laparoscopic tool, the second handheld device being communicatively coupled to the virtual reality processor to manipulate the at least one virtual manual laparoscopic tool in the virtual robotic surgical environment.
[0015] The second handheld device can be modular. For example, the tool feature can be removable from the hand portion of the second handheld device, thereby allowing the second handheld device to function as a laparoscopic handheld device (to control the virtual manual laparoscopic tool) when the tool feature is attached to the hand portion, as well as a non-laparoscopic handheld device (for example, to control a robotically controlled tool or robotic arm) when the tool feature is detached from the hand portion. In some variations, the hand portion of the second handheld device may be substantially similar to the first handheld device.
[0016] The hand portion of the second handheld device may include an interactive feature, such as a trigger or button, that actuates a function of the virtual manual laparoscopic tool in response to engagement of the interactive feature by a user. For example, a trigger on the hand portion of the second handheld device can be mapped to a virtual trigger on the virtual manual laparoscopic tool. As an illustrative example, in a variation in which the virtual manual laparoscopic tool is a virtual manual laparoscopic stapler, a trigger on the hand portion can be mapped to fire a virtual staple in the virtual environment. Other aspects of the system can further approximate the setup of the virtual tool in the virtual environment. For example, the virtual reality system may additionally include a patient simulator (for example, a mock patient abdomen) that includes a cannula configured to receive at least a portion of the tool feature of the second handheld device, to thereby further simulate for the user the feel of a manual laparoscopic tool.
[0017] In general, a computer-implemented method for operating a virtual robotic surgical environment may include generating a virtual robotic surgical environment using a client application, where the virtual robotic surgical environment includes at least one virtual robotic component, and passing information between two software applications in order to effect movements of the virtual robotic component. For example, in response to a user input to move the at least one virtual robotic component in the virtual robotic surgical environment, the method may include passing status information relating to the at least one virtual robotic component from the client application to a server application, generating an actuation command based on the user input and the status information using the server application, passing the actuation command from the server application to the client application, and moving the at least one virtual robotic component based on the actuation command. The client application and the server application can be run on a shared processor device, or on separate processor devices.
[0018] In some variations, passing the status information and/or passing the actuation command may include invoking an application programming interface (API) to support communication between the client and server applications.
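A rough sketch of the status-to-command round trip just described is shown below, assuming hypothetical type and function names; the "server" here is a direct function call, although the two applications might equally run in separate processes or on separate machines.

```cpp
#include <array>
#include <cstddef>

// Status the client application reports for a virtual robotic component.
struct ComponentStatus {
    std::array<double, 7> jointAngles;   // current virtual joint angles
    std::array<double, 3> userInput;     // e.g., displacement of a touch point
};

// Actuation command returned by the server (kinematics) application.
struct ActuationCommand {
    std::array<double, 7> jointTorques;  // one torque per virtual joint
};

// Hypothetical server-side API: derive a command from status plus user input.
ActuationCommand generateCommand(const ComponentStatus& status) {
    ActuationCommand cmd{};
    // Placeholder "kinematics": real code would run inverse kinematics or a
    // control law; here each joint just receives a small corrective torque.
    for (std::size_t i = 0; i < cmd.jointTorques.size(); ++i)
        cmd.jointTorques[i] = 0.1 * status.jointAngles[i];
    return cmd;
}

// Hypothetical client-side step: send status, receive command, apply motion.
void clientStep(ComponentStatus& status) {
    ActuationCommand cmd = generateCommand(status);   // the "server" call
    for (std::size_t i = 0; i < status.jointAngles.size(); ++i)
        status.jointAngles[i] += 0.001 * cmd.jointTorques[i];  // integrate motion
}
```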
The API can include one or more data structure definitions for virtual robotic components and other virtual components in the virtual environment. For example, the API can include a plurality of data structures for a virtual robotic arm, a virtual robotic arm segment (for example, a link), a virtual patient table, a virtual cannula, and/or a virtual surgical instrument. As another example, the API can include a data structure for a virtual touch point to allow manipulation of the at least one virtual robotic component (for example, a virtual robotic arm) or another virtual component.
[0019] For example, the method may include passing status information relating to a virtual robotic arm, such as position and orientation (for example, the pose of the virtual robotic arm). The client application can pass such status information to the server application, whereupon the server application can generate an actuation command based on kinematics associated with the virtual robotic arm.
[0020] As described in this document, there are several applications and uses for the virtual reality system. In one variation, the virtual reality system can be used to accelerate the R&D cycle during the development of a robotic surgical system, such as by allowing simulation of potential designs without the significant time and expense of physical prototypes. For example, a method for designing a robotic surgical system may include generating a virtual model of a robotic surgical system, testing the virtual model of the robotic surgical system in a virtual operating room environment, modifying the virtual model of the robotic surgical system based on the test, and generating a real model of the robotic surgical system based on the modified virtual model. Testing the virtual model may, for example, involve performing a virtual surgical procedure using a virtual robotic arm and a virtual surgical instrument supported by the virtual robotic arm, such as through the client application described in this document. During a test, the system can detect one or more collision events involving the virtual robotic arm, which can, for example, trigger a modification of the virtual model (for example, modifying the virtual robotic arm in link length, diameter, etc.) in response to the detected collision event. Additional testing of the modified virtual model can then be performed, to thereby confirm whether the modification reduced the likelihood of a collision event occurring during the virtual surgical procedure. In this way, testing and modifying robotic surgical system designs in a virtual environment can be used to identify issues before testing physical prototypes of the designs.
[0021] In another variation, the virtual reality system can be used to test a control mode for a robotic surgical component. For example, a method for testing a control mode for a robotic surgical component may include generating a virtual robotic surgical environment, the virtual robotic surgical environment comprising at least one virtual robotic component that corresponds to the robotic surgical component, emulating a control mode for the robotic surgical component in the virtual robotic surgical environment, and, in response to a user input to move the at least one virtual robotic component, moving the at least one virtual robotic component according to the emulated control mode.
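As a rough illustration of the collision-driven design iteration described two paragraphs above, the sketch below retests a parametric arm design after each modification; the adjustment rule and all names are hypothetical, not the method claimed above.

```cpp
#include <vector>

// Hypothetical parametric description of a robotic arm design under test.
struct ArmDesign {
    std::vector<double> linkLengths;  // meters
    double linkDiameter;              // meters
};

// Outcome of one virtual surgical procedure run in the virtual operating room.
struct VirtualTestResult {
    int collisionEvents;              // collision events detected during the run
};

using VirtualTest = VirtualTestResult (*)(const ArmDesign&);

// Iterate the design in response to detected collision events, retesting after
// each modification; the 5% / 2% adjustments are purely illustrative.
ArmDesign iterateDesign(ArmDesign design, VirtualTest runVirtualTest) {
    VirtualTestResult result = runVirtualTest(design);
    for (int attempt = 0; attempt < 20 && result.collisionEvents > 0; ++attempt) {
        design.linkDiameter *= 0.95;
        for (double& length : design.linkLengths) length *= 0.98;
        result = runVirtualTest(design);  // confirm the modification helped
    }
    return design;  // candidate worth building as a physical prototype
}
```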
In some variations, moving the virtual robotic component may include passing status information relating to the at least one virtual robotic component from a first application (for example, a virtual operating environment application) to a second application (for example, a kinematics application), generating an actuation command based on the status information and the emulated control mode, passing the actuation command from the second application to the first application, and moving the at least one virtual robotic component in the virtual robotic surgical environment based on the actuation command.
[0022] For example, the control mode to be tested can be a trajectory following control mode for a robotic arm. In trajectory following, the movement of the robotic arm can be programmed and then emulated using the virtual reality system. Thus, when the system is used to emulate the trajectory following control mode, the actuation command generated by the kinematics application may include generating an actuation command for each of a plurality of virtual joints in the virtual robotic arm. This set of actuation commands can be implemented by the virtual operating environment application to move the virtual robotic arm in the virtual environment, thereby allowing testing for collisions, volume or workspace of movement, etc.
[0023] Other variations and examples of virtual reality systems, their user modes and interactions, and applications and uses of virtual reality systems are described in further detail in this document.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] Figure 1A illustrates an example of an operating room arrangement with a robotic surgical system and a surgeon's console. Figure 1B is a schematic illustration of an example of a variation of a robotic arm manipulator, tool driver, and cannula with a surgical tool.
[0025] Figure 2A is a schematic illustration of a variation of a virtual reality system. Figure 2B is a schematic illustration of an immersive display showing an immersive view of a virtual reality environment.
[0026] Figure 3 is a schematic illustration of components of a virtual reality system.
[0027] Figure 4A is an example of a structure for communication between a virtual reality environment application and a kinematics application for use in a virtual reality system. Figures 4B and 4C are tables that summarize examples of data structures and fields for an application programming interface for communication between the virtual reality environment application and the kinematics application.
[0028] Figure 5A is a schematic illustration of another variation of a virtual reality system that includes an example of a variation of a laparoscopic handheld controller. Figure 5B is a schematic illustration of an immersive display showing an immersive view of a virtual reality environment that includes a virtual manual laparoscopic tool controlled by the laparoscopic handheld controller.
[0029] Figure 6A is a perspective view of an example of a variation of a laparoscopic handheld controller. Figure 6B is a schematic illustration of a virtual manual laparoscopic tool superimposed on part of the laparoscopic handheld controller shown in Figure 6A. Figures 6C-6E are a side view, a detailed partial perspective view, and a partial cross-sectional view, respectively, of the laparoscopic handheld controller shown in Figure 6A.
[0030] Figure 7 is a schematic illustration of another variation of a virtual reality system that interfaces with a robotic surgical environment.
[0031] Figure 8 is a schematic illustration of a menu displayed for selecting one or more user modes in a variation of a virtual reality system.
[0032] Figures 9A to 9C are schematic illustrations of a virtual robotic surgical environment with example portals.
[0033] Figures 10A and 10B are schematic illustrations of an example of a virtual robotic surgical environment seen in a flight mode. Figure 10C is a schematic illustration of a transition region for modifying the view of the example virtual robotic surgical environment in flight mode.
[0034] Figure 11 is a schematic illustration of a virtual robotic surgical environment seen from a vantage point providing an example of a model view of a virtual operating room.
[0035] Figure 12 is a schematic illustration of a view of a virtual robotic surgical environment with an example of an enlarged view for showing supplementary views.
[0036] Figure 13 is a schematic illustration of a display provided by a variation of a virtual reality system operating in a virtual command station mode.
[0037] Figure 14 is a flow chart of an example of a variation of a method for operating a user mode menu for the selection of user modes in a virtual reality system.
[0038] Figure 15 is a flow chart of an example of a variation of a method for operating in an environment view rotation mode in a virtual reality system.
[0039] Figure 16 is a flow chart of an example of a variation of a method for operating a user mode allowing pressure points in a virtual environment.
DETAILED DESCRIPTION
[0040] Examples of various aspects and variations of the present invention are described in this document and illustrated in the accompanying drawings. The following description is not intended to limit the invention to these embodiments, but rather to allow a person skilled in the art to make and use the present invention.
OVERVIEW OF THE ROBOTIC SURGICAL SYSTEM
[0041] An example of a robotic surgical system and surgical environment is illustrated in Figure 1A. As shown in Figure 1A, a robotic surgical system 150 can include one or more robotic arms 160 located on a surgical platform (for example, a table, bed, etc.), with end effectors or surgical tools attached to the distal ends of the robotic arms 160 for performing a surgical procedure. For example, a robotic surgical system 150 may include, as shown in the schematic example of Figure 1B, at least one robotic arm 160 coupled to the surgical platform, and a tool driver 170 generally attached to the distal end of the robotic arm 160. A cannula coupled to the end of the tool driver 170 can receive and guide a surgical instrument 190 (for example, an end effector, camera, etc.). In addition, the robotic arm 160 may include a plurality of links that are actuated to position and orient the tool driver 170, which actuates the surgical instrument 190. The robotic surgical system may additionally include a control tower 152 (for example, including a power source, computing equipment, etc.) and/or other equipment suitable for providing functional support to the robotic components.
[0042] In some variations, a user (such as a surgeon or other operator) can use a user console 100 to remotely manipulate the robotic arms 160 and/or surgical instruments (for example, by teleoperation).
User console 100 can be located in the same procedure room as the robotic system 150, as shown in Figure 1A. In other embodiments, user console 100 can be located in an adjacent or nearby room, or teleoperated from a remote location in a different building, city, or country. In one example, user console 100 comprises a seat 110, foot-operated controls 120, one or more handheld user interface devices 122, and at least one user display 130 configured to display, for example, a view of the surgical field within a patient. For example, as shown in the example user console of Figure 1C, a user located on seat 110 who is viewing the user display 130 can manipulate the foot-operated controls 120 and/or handheld user interface devices to remotely control the robotic arms 160 and/or surgical instruments.
[0043] In some variations, a user can operate the robotic surgical system 150 in an "over the bed" (OTB) mode, in which the user is at the patient's side and simultaneously manipulates a robotically driven tool/end effector attached thereto (for example, with a handheld user interface device 122 held in one hand) and a manual laparoscopic tool. For example, the user's left hand may be manipulating a handheld user interface device 122 to control a robotic surgical component, while the user's right hand may be manipulating a manual laparoscopic tool. Thus, in these variations, the user can perform not only robot-assisted MIS, but also manual laparoscopic techniques on a patient.
[0044] During an example procedure or surgery, the patient is prepared and draped in a sterile fashion, and anesthesia is achieved. Initial access to the surgical field can be performed manually with the robotic system 150 in a stowed configuration or withdrawn configuration to facilitate access to the surgical field. Once access is complete, initial positioning and/or preparation of the robotic system can be performed. During the surgical procedure, a surgeon or other user at user console 100 can use the foot-operated controls 120 and/or user interface devices 122 to manipulate various end effectors and/or imaging systems to perform the procedure. Manual assistance can also be provided at the procedure table by sterile-gowned personnel, who can perform tasks including, but not limited to, retracting organs, or performing manual repositioning or tool changes involving one or more robotic arms 160. Non-sterile personnel can also be present to assist the surgeon at user console 100. When the procedure or surgery is completed, the robotic system 150 and/or user console 100 can be configured or adjusted into a state to facilitate one or more post-operative procedures, including, but not limited to, cleaning and/or sterilization of the robotic system 150, and/or entry or printing of a healthcare record, whether electronic or hard copy, such as via the user console 100.
[0045] In Figure 1A, the robotic arms 160 are shown with a table-mounted system, but in other embodiments the robotic arms can be mounted on a cart, ceiling or side wall, or on another suitable support surface. Communication between the robotic system 150, user console 100, and any other displays can be via wired and/or wireless connections. Any wired connections can optionally be built into the floor and/or walls or ceiling.
Communication between user console 100 and robotic system 150 can be wired and/or wireless, and can be proprietary and/or carried out using any of a variety of data communication protocols. In still other variations, the user console 100 does not include an integrated display 130, but can provide a video output that can be connected for output to one or more generic displays, including remote displays accessible via the internet or a network. The video output or feed can also be encrypted for privacy, and all or part of the video output can be saved to a server or electronic healthcare record system.
[0046] In other examples, additional user consoles 100 may be provided, for example, to control additional surgical instruments, and/or to take control of one or more surgical instruments at a main user console. This will allow, for example, a surgeon to take over or demonstrate a technique during a surgical procedure with medical students and physicians in training, or to assist during complex surgeries that require multiple surgeons acting simultaneously or in a coordinated manner.
VIRTUAL REALITY SYSTEM
[0047] A virtual reality system for providing a virtual robotic surgical environment is described in this document. As shown in Figure 2A, a virtual reality system 200 can include a virtual reality processor 210 (for example, a processor on a computer that implements instructions stored in memory) to generate a virtual robotic surgical environment, a head-mounted display 220 wearable by a user U, and one or more handheld controllers 230 manipulable by the user U to interact with the virtual robotic surgical environment. As shown in Figure 2B, the head-mounted display 220 may include an immersive display 222 for displaying the virtual robotic surgical environment to the user U (for example, with a first-person perspective view of the virtual environment). The immersive display can, for example, be a stereoscopic display provided by eyepiece assemblies. In some variations, the virtual reality system 200 may additionally or alternatively include an external display 240 to display the virtual robotic surgical environment. The immersive display 222 and the external display 240, if both are present, can be synchronized to show the same content or similar content.
[0048] As described in further detail in this document, the virtual reality system (and variations thereof, as further described in this document) can function as a useful tool with respect to robotic surgery, in applications that include, but are not limited to, training, simulation, and/or collaboration between multiple people. More specific examples of applications and uses of the virtual reality system are described in this document.
[0049] In general, the virtual reality processor is configured to generate a virtual robotic surgical environment within which a user can navigate around a virtual operating room and interact with virtual objects through the head-mounted display and/or handheld controllers. For example, a virtual robotic surgical system can be integrated within a virtual operating room, with one or more virtual robotic components having three-dimensional meshes and selected characteristics (for example, dimensions and kinematic restrictions of virtual robotic arms and/or surgical tools, their number and arrangement, etc.).
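As an illustration only, a configuration entry for one such virtual robotic component might carry a mesh reference together with kinematic limits and arrangement information; the C++ structures below are an assumed sketch of what a loaded configuration file could be parsed into, not the patent's file format.

```cpp
#include <string>
#include <vector>

// Hypothetical description of one virtual robotic arm as it might be read
// from a predetermined configuration file.
struct VirtualArmConfig {
    std::string meshFile;             // three-dimensional mesh asset
    std::vector<double> linkLengths;  // link dimensions, meters
    std::vector<double> jointMinRad;  // kinematic restrictions:
    std::vector<double> jointMaxRad;  //   per-joint angle limits, radians
    int mountIndex;                   // arrangement, e.g., table rail position
};

// A virtual operating room is then assembled from a set of such components.
struct OperatingRoomConfig {
    std::vector<VirtualArmConfig> arms;
    std::string patientModelFile;
    std::string tableModelFile;
};
```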
Other virtual objects, such as virtual control towers or other virtual equipment representing the equipment supporting the robotic surgical system, a virtual patient, a virtual table or other surface for the patient, virtual medical staff, a virtual user console, etc., can also be integrated into the virtual reality operating room.
[0050] In some variations, the head-mounted display 220 and/or the handheld controllers 230 can be modified versions of those included in any suitable virtual reality hardware system that is commercially available for applications including virtual and augmented reality environments (for example, for gaming and/or military purposes) and that are familiar to those skilled in the art. For example, the head-mounted display 220 and/or handheld controllers 230 can be modified to allow interaction by a user with a virtual robotic surgical environment (for example, a handheld controller 230 can be modified as described below to operate as a laparoscopic handheld controller). The handheld controller may include, for example, a carried device (for example, a wand, remote device, etc.) and/or a garment worn on the user's hands (for example, gloves, rings, bracelets, etc.) that includes sensors and/or is configured to cooperate with external sensors, to thereby provide tracking of the user's hands, individual finger(s), wrist(s), etc. Other suitable controllers may additionally or alternatively be used (for example, sleeves configured to provide tracking of the user's arm(s)).
[0051] In general, a user U can wear the head-mounted display 220 and carry (or wear) at least one handheld controller 230 while he or she moves around a physical workspace, such as a training room. While wearing the head-mounted display 220, the user can view a first-person perspective view of the virtual robotic surgical environment generated by the virtual reality processor 210 and displayed on the immersive display 222. As shown in Figure 2B, the view displayed on the immersive display 222 may include one or more graphical representations 230' of the handheld controllers (for example, virtual models of the handheld controllers, virtual models of human hands in place of the handheld controllers, etc.). A similar first-person perspective view can be displayed on an external display 240 (for example, for assistants, mentors or other suitable people to view). As the user moves and navigates within the workspace, the virtual reality processor 210 can change the view of the virtual robotic surgical environment displayed on the immersive display 222 based at least in part on the location and orientation of the head-mounted display (and therefore the user's location and orientation), allowing the user to feel as if he or she is exploring and moving within the virtual robotic surgical environment.
[0052] Additionally, the user can further interact with the virtual robotic surgical environment by moving and/or manipulating the handheld controllers 230. For example, the handheld controllers 230 may include interactive features, such as those described further below, that the user can manipulate to interact with the virtual environment.
[0053] In some variations, the virtual reality system can engage other user senses. For example, the virtual reality system may include one or more audio devices (for example, headphones for the user, speakers, etc.) to relay audio feedback to the user. As another example, the virtual reality system can provide tactile feedback, such as vibration, in one or more of the handheld controllers 230, the head-mounted display 220, or other haptic devices that come into contact with the user (for example, gloves, bracelets, etc.).
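The per-frame behavior sketched in paragraphs [0051] to [0053] (view following the head-mounted display, graphical representation 230' following the controller, haptic feedback on interaction) could be organized roughly as follows; this is a minimal C++ sketch under assumed names, not the system's actual rendering loop.

```cpp
#include <array>
#include <iostream>

struct Pose {
    std::array<double, 3> position;     // meters, in the physical workspace
    std::array<double, 4> orientation;  // quaternion (x, y, z, w)
};

// Tracked state gathered for one rendered frame.
struct FrameInput {
    Pose headPose;        // head-mounted display pose
    Pose controllerPose;  // handheld controller pose
    bool contactEvent;    // e.g., the user's virtual hand touched a virtual object
};

// Hypothetical per-frame update run by the virtual reality processor.
class VirtualSceneRenderer {
public:
    void update(const FrameInput& in) {
        cameraPose_ = in.headPose;                 // immersive view follows the head
        controllerModelPose_ = in.controllerPose;  // draw 230' where the controller is
        if (in.contactEvent) {
            std::cout << "haptics: vibrate controller briefly\n";  // placeholder
        }
    }
private:
    Pose cameraPose_{};
    Pose controllerModelPose_{};
};
```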
VIRTUAL REALITY PROCESSOR
[0054] The virtual reality processor 210 can be configured to generate a virtual robotic surgical environment within which a user can navigate around a virtual operating room and interact with virtual objects. A general schematic illustration of an example of interaction between the virtual reality processor and at least some components of the virtual reality system is shown in Figure 3.
[0055] In some variations, the virtual reality processor 210 may be in communication with hardware components such as the head-mounted display 220 and/or the handheld controllers 230. For example, the virtual reality processor 210 may receive input from sensors on the head-mounted display 220 to determine the location and orientation of the user within the physical workspace, which can be used to generate a suitable corresponding first-person perspective view of the virtual environment to display on the head-mounted display 220 for the user. As another example, the virtual reality processor 210 can receive input from sensors on the handheld controllers 230 to determine the location and orientation of the handheld controllers 230, which can be used to generate suitable graphical representations of the handheld controllers 230 to display on the head-mounted display 220 for the user, as well as to translate the user's input (for interacting with the virtual environment) into corresponding modifications of the virtual robotic surgical environment. The virtual reality processor 210 can be coupled to an external display 240 (for example, a monitor screen) that is visible to the user in a non-immersive mode and/or to other people, such as assistants or mentors, who may wish to view the user's interactions with the virtual environment.
[0056] In some variations, the virtual reality processor 210 (or multiple processing machines) can be configured to run one or more software applications to generate the virtual robotic surgical environment. For example, as shown in Figure 4A, the virtual reality processor 210 can use at least two software applications, including a virtual operating environment application 410 and a kinematics application 420. The virtual operating environment application and the kinematics application can communicate through a client-server model. For example, the virtual operating environment application can operate as a client, while the kinematics application can operate as a server. The virtual operating environment application 410 and the kinematics application 420 can be run on the same processing machine, or on separate processing machines coupled via a computer network (for example, the client or the server can be a remote device, or the machines can be on a local computer network). In addition, it should be understood that in other variations, the virtual operating environment application 410 and/or the kinematics application 420 may interface with other software components. In some variations, the virtual operating environment application 410 and the kinematics application 420 may invoke one or more application programming interfaces (APIs), which define the way in which the applications communicate with each other.
[0057] The virtual operating environment application 410 may allow a description or definition of a virtual operating room environment (for example, the operating room, operating table, control tower or other components, user console, robotic arms, table adapter links coupling the robotic arms to the operating table, etc.). At least some descriptions of the virtual operating room environment can be saved (for example, in a virtual reality component database 202) and provided to the processor as configuration files.
[0058] In some variations, as shown in Figure 3, the virtual reality processor 210 may additionally or alternatively be in communication with a patient records database 204, which can store patient-specific information. Such patient-specific information can include, for example, patient imaging data (for example, X-ray, MRI, CT, ultrasound, etc.), medical history, and/or patient metrics (for example, age, weight, height, etc.), although other suitable patient-specific information can additionally or alternatively be stored in the patient records database 204. When generating the virtual robotic surgical environment, the virtual reality processor 210 can receive patient-specific information from the patient records database 204 and integrate at least some of the received information within the virtual reality environment. For example, a realistic representation of the patient's body or other tissue can be generated and incorporated within the virtual reality environment (for example, a 3D model generated from a combined stack of 2D images, such as MRI images), which can be useful, for example, for determining desirable robotic arm arrangements around the patient, optimal port placement, etc., specific to a particular patient, as further described in this document. As another example, patient imaging data can be overlaid over a portion of the user's field of view of the virtual environment (for example, superimposing an ultrasound image of a patient's tissue over the virtual patient's tissue).
[0059] In some variations, the virtual reality processor 210 may include one or more kinematics algorithms, via the kinematics application 420, to at least partially describe the behavior of one or more components of the virtual robotic system in the virtual robotic surgical environment. For example, one or more algorithms can define how a virtual robotic arm responds to user interactions (for example, moving the virtual robotic arm by selecting and manipulating a touch point on the virtual robotic arm), or how a virtual robotic arm operates in a selected control mode. Other kinematics algorithms, such as those that define the operation of a virtual tool driver, a virtual patient table, or other virtual components, can additionally or alternatively be embedded in the virtual environment. By embedding in the virtual environment one or more kinematics algorithms that accurately describe the behavior of an actual (real) robotic surgical system, the virtual reality processor 210 can allow the virtual robotic surgical system to function accurately or realistically compared to a physical implementation of a corresponding real robotic surgical system.
For example, the virtual reality processor 210 can embed at least one control algorithm that represents or corresponds to one or more control modes that define the movement of a robotic component (for example, an arm).
[0060] For example, the kinematics application 420 may allow the description or definition of one or more virtual control modes, such as for virtual robotic arms or other suitable virtual components in the virtual environment. In general, for example, a control mode for a virtual robotic arm can correspond to a function block that allows the virtual robotic arm to carry out or perform a particular task. For example, as shown in Figure 4A, a control system 430 can include multiple virtual control modes 432, 434, 436, etc. governing the actuation of at least one joint in the virtual robotic arm. The virtual control modes 432, 434, 436, etc. can include at least one primitive mode (which governs the underlying behavior for actuating at least one joint) and/or at least one user mode (which governs higher-level, task-specific behavior and can use one or more primitive modes). In some variations, a user can activate a virtual touch point surface of a virtual robotic arm or other virtual object, thereby triggering a particular control mode (for example, through a state machine or another controller). In some variations, a user can directly select a particular control mode through, for example, a menu displayed in the first-person perspective view of the virtual environment.
[0061] Examples of primitive virtual control modes include, but are not limited to, a joint command mode (which allows a user to directly actuate a single virtual joint individually, and/or multiple virtual joints collectively), a gravity compensation mode (in which the virtual robotic arm holds itself in a particular pose, with particular position and orientation of the links and joints, without drifting downwards due to simulated gravity), and a trajectory following mode (in which the robotic arm can move to follow a sequence of one or more Cartesian or other trajectory commands). Examples of user modes that incorporate one or more primitive control modes include, but are not limited to, an idle mode (in which the virtual robotic arm can remain in a current or default pose awaiting further commands), a setup mode (in which the virtual robotic arm can transition to a starting configuration pose or a predetermined template pose for a particular type of surgical procedure), and a docking mode (in which the robotic arm facilitates the process by which the user attaches the robotic arm to a part, such as with gravity compensation, etc.).
[0062] In general, the virtual operating environment application 410 and the kinematics application 420 can communicate with each other through a predefined communication protocol, such as an application programming interface (API) that organizes information (for example, status or other characteristics) about virtual objects and other aspects of the virtual environment.
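Before turning to the API itself, the mode hierarchy just described could be represented, purely as an illustrative sketch, along the following lines; the enumerators and the mapping are assumptions rather than the control system's actual structure.

```cpp
// Primitive modes govern low-level actuation of virtual joints; user modes
// are task-level behaviors built on top of one or more primitive modes.
enum class PrimitiveMode { JointCommand, GravityCompensation, TrajectoryFollowing };
enum class UserMode { Idle, Setup, Docking };

// Hypothetical mapping from a task-level user mode to the primitive mode it
// relies on; a real control system would be considerably richer.
PrimitiveMode primitiveFor(UserMode mode) {
    switch (mode) {
        case UserMode::Idle:    return PrimitiveMode::GravityCompensation;
        case UserMode::Setup:   return PrimitiveMode::TrajectoryFollowing;
        case UserMode::Docking: return PrimitiveMode::GravityCompensation;
    }
    return PrimitiveMode::GravityCompensation;
}

// A touch-point activation or a menu selection could drive a simple state
// machine that switches the active user mode for a virtual robotic arm.
struct ArmModeStateMachine {
    UserMode active = UserMode::Idle;
    void onUserSelection(UserMode requested) { active = requested; }
};
```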
The API can, for example, include data structures that specify how to communicate information about virtual objects such as a virtual robotic arm (as a whole and/or on a segment-by-segment basis), a virtual table, a virtual table adapter connecting a virtual arm to the virtual table, a virtual cannula, a virtual tool, a virtual touch point for facilitating user interaction with the virtual environment, user input systems, handheld controller devices, etc. In addition, the API can include one or more data structures that specify how to communicate information about events in the virtual environment (for example, a collision event between two virtual entities) or other aspects related to the virtual environment (for example, the frame of reference for displaying the virtual environment, the control system structure, etc.). Example data structures and example fields for containing their information are listed and described in Figures 4B and 4C, although it should be understood that other variations of an API may include any suitable types, names and numbers of data structures and fields.
[0063] In some variations, as generally illustrated schematically in Figure 4A, the virtual operating environment application 410 passes status information to the kinematics application 420, and the kinematics application 420 passes commands to the virtual operating environment application 410 through the API, where the commands are generated based on the status information and subsequently used by the virtual reality processor 210 to generate changes in the virtual robotic surgical environment. For example, a method for embedding one or more kinematics algorithms in a virtual robotic surgical environment for the control of a virtual robotic arm may include passing status information relating to at least a portion of the virtual robotic arm from the virtual operating environment application 410 to the kinematics application 420, algorithmically determining an actuation command to actuate at least one virtual joint of the virtual robotic arm, and passing the actuation command from the kinematics application 420 to the virtual operating environment application 410. The virtual reality processor 210 can subsequently move the virtual robotic arm according to the actuation command.
[0064] As an illustrative example of controlling a virtual robotic arm, a gravity compensation control mode for a virtual robotic arm can be invoked, thereby requiring one or more virtual joint actuation commands in order to counteract the simulated gravitational forces at the virtual joints of the virtual robotic arm. The virtual operating environment application 410 may pass to the kinematics application 420 relevant status information relating to the virtual robotic arm (for example, the position of at least a portion of the virtual robotic arm, the position of the virtual patient table on which the virtual robotic arm is mounted, the position of a virtual touch point that the user may have manipulated to move the virtual robotic arm, and the joint angles between adjacent virtual arm links) and status information relating to the environment (for example, the direction of the simulated gravitational force acting on the virtual robotic arm).
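Packaged for the API, the status information just listed might look roughly like the structures below; the field names are assumptions, not the data structures summarized in Figures 4B and 4C.

```cpp
#include <array>

// Status passed from the virtual operating environment application to the
// kinematics application while gravity compensation is active.
struct GravityCompStatus {
    std::array<double, 7> jointAngles;         // angles between adjacent links
    std::array<double, 3> armBasePosition;     // where the arm meets the table
    std::array<double, 3> touchPointPosition;  // user-manipulated touch point
    std::array<double, 3> gravityDirection;    // simulated gravity, unit vector
};

// Command returned by the kinematics application: one actuation force or
// torque per virtual joint, chosen to counteract the simulated gravity load.
struct GravityCompCommand {
    std::array<double, 7> jointTorques;
};
```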
Based on the status information received from the virtual operating environment application 410 and known kinematic and/or dynamic properties of the virtual robotic arm and/or the virtual tool driver attached to the virtual robotic arm (for example, known from a configuration file, etc.), the control system 430 can algorithmically determine the actuation force required at each virtual joint to compensate for the simulated gravitational force acting on that virtual joint. For example, the control system 430 can use a forward kinematics algorithm, an inverse algorithm, or any other suitable algorithm. Once an actuation force command for each relevant virtual joint of the virtual robotic arm is determined, the kinematics application 420 can send the force commands to the virtual operating environment application 410. The virtual reality processor can subsequently actuate the virtual joints of the virtual robotic arm according to the force commands, thereby causing the virtual robotic arm to be visualized as maintaining its current pose despite the simulated gravitational force in the virtual environment (for example, instead of falling or collapsing under the simulated gravitational force).
[0065] Another example of controlling a virtual robotic arm is trajectory following for a robotic arm. In trajectory following, the movement of the robotic arm can be programmed and then emulated using the virtual reality system. Thus, when the system is used to emulate a trajectory following control mode, the actuation command generated by the kinematics application can include generating an actuation command for each of a plurality of virtual joints in the virtual robotic arm. This set of actuation commands can be implemented by the virtual operating environment application to move the virtual robotic arm in the virtual environment, thereby allowing testing for collisions, volume or workspace of movement, etc.
[0066] Other virtual control algorithms for the virtual robotic arm and/or other virtual components (for example, the virtual table adapter links coupling the virtual robotic arm to a virtual operating table) can be implemented through similar communication between the virtual operating environment application 410 and the kinematics application 420.
[0067] Although the virtual reality processor 210 is generally referred to in this document as a single processor, it should be understood that in some variations, multiple processors can be used to perform the processing described in this document. The one or more processors may include, for example, a general purpose computer processor, a special purpose computer or controller, or another programmable data processing device or component, etc. Generally, the one or more processors can be configured to execute instructions stored on any suitable computer-readable media. Computer-readable media can include, for example, magnetic media, optical media, magneto-optical media and hardware devices that are specially configured to store and execute program code, such as application-specific integrated circuits (ASICs), programmable logic devices (PLDs), ROM and RAM devices, flash memory, EEPROMs, optical devices (for example, CD or DVD), hard drives, floppy drives or any other suitable device. Examples of computer program code include machine code, as produced by a compiler, and files containing higher-level code that are executed by a computer using an interpreter.
For example, a variation can be implemented using C++, JAVA or another suitable object-oriented programming language and development tools. As another example, another variation can be implemented in hardwired circuitry instead of, or in combination with, machine-executable software instructions.
HEAD-MOUNTED DISPLAY AND HANDHELD CONTROLLERS
[0068] As shown in Figure 2A, a user U can wear a head-mounted display 220 and/or hold one or more handheld controllers 230. The head-mounted display 220 and handheld controllers 230 can generally allow a user to navigate and/or interact with the virtual robotic surgical environment generated by the virtual reality processor 210. The head-mounted display 220 and/or handheld controllers 230 can communicate signals to the virtual reality processor 210 through a wired or wireless connection.
[0069] In some variations, the head-mounted display 220 and/or the handheld controllers 230 can be modified versions of those included in any suitable virtual reality hardware system that is commercially available for applications including virtual and augmented reality environments. For example, the head-mounted display 220 and/or handheld controllers 230 can be modified to allow interaction by a user with a virtual robotic surgical environment (for example, a handheld controller 230 can be modified as described below to operate as a laparoscopic handheld controller). In some variations, the virtual reality system may additionally include one or more tracking emitters 212 that emit infrared light within a workspace for the user U. The tracking emitters 212 can, for example, be mounted on a wall, ceiling, fixture, or other suitable mounting surface. Sensors can be coupled to outward-facing surfaces of the head-mounted display 220 and/or handheld controllers 230 to detect the emitted infrared light. Based on the location of any sensors that detect the emitted light, and on when those sensors detect the emitted light after the light is emitted, the virtual reality processor 210 can be configured to determine (for example, through triangulation) the location and orientation of the head-mounted display 220 and/or handheld controllers 230 within the workspace. In other variations, other suitable means (for example, other sensor technologies, such as accelerometers or gyroscopes, other sensor arrangements, etc.) can be used to determine the location and orientation of the head-mounted display 220 and handheld controllers 230.
[0070] In some variations, the head-mounted display 220 may include straps (for example, with buckles, elastics, springs, etc.) that facilitate mounting of the display 220 on the user's head. For example, the head-mounted display 220 can be structured similarly to goggles, a headband or headset, a cap, etc. The head-mounted display 220 may include two sets of eyepieces that provide a stereoscopic immersive display, although, alternatively, it may include any suitable display.
[0071] The handheld controllers 230 can include interactive features that the user can manipulate to interact with the virtual robotic surgical environment. For example, the handheld controllers 230 may include one or more buttons, triggers, touchscreen features, scroll wheels, switches, and/or other suitable interactive features. In addition, the handheld controllers 230 can have any of several form factors, such as a wand, tweezers, generally round shapes (for example, ball or egg shapes), etc.
In some variations, the graphical representations 230' displayed on the head-mounted display 220 and/or on the external display 240 can generally mimic the form factor of the actual handheld controllers 230. In some variations, the handheld controller may include a carried device (for example, a wand, remote device, etc.) and/or a garment worn on the user's hand (for example, gloves, rings, bracelets, etc.) that includes sensors and/or is configured to cooperate with external sensors, to thereby track the user's hands, individual fingers, wrists, etc. Other suitable controllers can additionally or alternatively be used (for example, sleeves configured to provide tracking of the user's arm(s)).
LAPAROSCOPIC HANDHELD CONTROLLER
[0072] In some variations, as shown in the schematic illustration of Figure 5A, the handheld controller 230 may additionally include at least one tool feature 232 that is representative of at least a portion of a manual laparoscopic tool, thereby forming a laparoscopic handheld controller 234 that can be used to control a virtual manual laparoscopic tool. In general, for example, the tool feature 232 may serve to adapt the handheld controller 230 into a controller substantially similar in form (for example, in user feel and touch) to a manual laparoscopic tool. The laparoscopic handheld controller 234 can be communicatively coupled to the virtual reality processor 210 to manipulate the virtual manual laparoscopic tool in the virtual robotic surgical environment, and can help allow the user to feel as if he or she is using an actual manual laparoscopic tool while interacting with the virtual robotic surgical environment. In some variations the laparoscopic handheld device may be a mock-up (for example, a facsimile or generic version) of a manual laparoscopic tool, while in other variations the laparoscopic handheld device may be a functioning manual laparoscopic tool. Movements of at least a portion of the laparoscopic handheld controller can be mapped by the virtual reality controller to correspond to movements of the virtual manual laparoscopic tool. Thus, in some variations, the virtual reality system can simulate the use of a manual laparoscopic tool for manual MIS.
[0073] As shown in Figure 5A, the laparoscopic handheld controller 234 can be used with a simulated patient setup to further simulate the feel of a virtual manual laparoscopic tool. For example, the laparoscopic handheld controller 234 can be inserted into a cannula 250 (for example, an actual cannula used in MIS procedures, to provide a realistic feel of a manual tool inside a cannula, or a suitable representation thereof, such as a tube with a lumen for receiving a portion of the tool shaft of the laparoscopic handheld controller 234). The cannula 250 can be arranged in a mock patient abdomen 260, such as a foam body with one or more insertion sites or ports for receiving the cannula 250.
[0074] Additionally, as shown in Figure 5B, the virtual reality processor can generate a virtual robotic surgical environment that includes a virtual manual laparoscopic tool 236' and/or a virtual cannula 250' relative to a virtual patient (for example, with the graphical representation 250' of the cannula illustrated as inserted in the virtual patient). As such, the virtual environment with the virtual manual laparoscopic tool 236' and the virtual cannula 250' can be displayed on the immersive display provided by the head-mounted display 220, and/or on the external display 240.
A calibration procedure can be performed to map the laparoscopic handheld controller 234 to the virtual manual laparoscopic tool 236' within the virtual environment. In this way, as the user moves and manipulates the laparoscopic handheld controller 234, the combination of the at least one tool feature 232 and the simulated patient setup can allow the user to feel, tactilely, as if he or she were using a manual laparoscopic tool in the virtual robotic surgical environment. Likewise, as the user moves and manipulates the laparoscopic handheld controller 234, the corresponding movements of the virtual manual laparoscopic tool 236' can allow the user to visualize the simulation as if he or she were using a manual laparoscopic tool in the virtual robotic surgical environment.

[0075] In some variations, the calibration procedure for the laparoscopic handheld controller generally maps the laparoscopic handheld controller 234 to the virtual manual laparoscopic tool 236'. For example, in general, the calibration procedure can

[0076] In some variations, as shown in Figure 5B, the system can include not only a handheld controller 230, but also a laparoscopic handheld controller 234. In this way, the virtual reality processor can generate a virtual environment that includes not only a graphical representation 230' of a handheld controller 230 (with the laparoscopic attachment), but also the virtual manual laparoscopic tool 236' described above. The handheld controller 230 can be communicatively coupled to the virtual reality processor 210 to manipulate at least one virtual robotic arm, and the laparoscopic handheld controller 234 can be communicatively coupled to the virtual reality processor 210 to manipulate the virtual manual laparoscopic tool 236'. Thus, in some variations, the virtual reality system can simulate an "over the bed" mode of using a robotic surgical system, in which an operator is at the patient's side and manipulates both a robotic arm (for example, with one hand) providing robot-assisted MIS and a manual laparoscopic tool providing manual MIS.

[0077] The tool feature 232 can generally include any suitable feature that approximates or is representative of a portion of a manual laparoscopic tool. For example, the tool feature 232 can generally approximate a laparoscopic tool shaft (for example, include an elongated member extending from a hand portion of the controller). As another example, the tool feature 232 may include a trigger, button, or other laparoscopic interactive feature similar to those present on a manual laparoscopic tool, which engages an interactive feature on the hand controller 230 but provides a realistic form factor mimicking the feel of a manual laparoscopic tool (for example, the tool feature 232 can include a larger trigger having a realistic form factor that overlays and engages a generic interactive feature on the hand controller 230). As yet another example, the tool feature 232 can include selected materials and/or masses that create a laparoscopic handheld controller 234 having a weight distribution similar to that of a particular type of manual laparoscopic tool. In some variations, the tool feature 232 may include plastic (for example, polycarbonate, acrylonitrile butadiene styrene (ABS), nylon, etc.) that is injection molded, machined, 3D printed, or otherwise suitably formed. In other variations, the tool feature 232 may include metal or other suitable material that is machined, cast, etc.
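As one illustration of how the calibration mapping described above could work in software, the sketch below (in C++, one of the implementation languages mentioned earlier) records a fixed offset between the tracked pose of the laparoscopic handheld controller and the desired pose of the virtual tool at calibration time, and then applies that offset during tracking. The type and function names (RigidPose, calibrate, mapControllerToVirtualTool) are illustrative assumptions, not names used by the system described here.

```cpp
#include <array>

// Minimal rigid transform: p_world = R * p_local + t.
struct RigidPose {
    std::array<std::array<double, 3>, 3> R;  // rotation matrix
    std::array<double, 3> t;                 // translation
};

// Compose two rigid transforms: result = a * b.
RigidPose compose(const RigidPose& a, const RigidPose& b) {
    RigidPose out{};
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j) {
            out.R[i][j] = 0.0;
            for (int k = 0; k < 3; ++k) out.R[i][j] += a.R[i][k] * b.R[k][j];
        }
        out.t[i] = a.t[i];
        for (int k = 0; k < 3; ++k) out.t[i] += a.R[i][k] * b.t[k];
    }
    return out;
}

// Invert a rigid transform: the inverse rotation is the transpose.
RigidPose invert(const RigidPose& p) {
    RigidPose out{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) out.R[i][j] = p.R[j][i];
    for (int i = 0; i < 3; ++i) {
        out.t[i] = 0.0;
        for (int k = 0; k < 3; ++k) out.t[i] -= out.R[i][k] * p.t[k];
    }
    return out;
}

// Calibration: with the physical controller held at a known reference pose
// (for example, the tool shaft seated in the cannula), record the fixed
// offset between the tracked controller pose and the desired virtual tool pose.
RigidPose calibrate(const RigidPose& trackedController,
                    const RigidPose& desiredVirtualTool) {
    return compose(invert(trackedController), desiredVirtualTool);
}

// At run time, apply the stored offset so the virtual manual laparoscopic tool
// follows the laparoscopic handheld controller.
RigidPose mapControllerToVirtualTool(const RigidPose& trackedController,
                                     const RigidPose& calibrationOffset) {
    return compose(trackedController, calibrationOffset);
}
```

Under these assumptions, the same offset can be reused every frame, so the virtual tool tracks the physical controller rigidly until the user recalibrates.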
[0078] In some variations, the tool feature 236 can be an adapter or other attachment that is formed separately from the hand controller 230 and attached to the hand controller 230 by means of fasteners (for example, screws, magnets, etc.), interlocking features (for example, threads, or snap-fit features such as tabs and slots, etc.), epoxy, welding (for example, ultrasonic welding), etc. The tool feature 236 can be reversibly coupled to the hand controller 230. For example, the tool feature 236 can be selectively attached to the hand controller 230 in order to adapt the hand controller 230 when a laparoscopic-style hand controller 230 is desired, while the tool feature 236 can be selectively detached from the hand controller 230 when the laparoscopic-style hand controller 230 is not desired. Alternatively, the tool feature 236 can be permanently attached to the hand portion 234, such as during manufacture. Additionally, in some variations, the hand portion 234 and the tool feature 236 can be integrally formed (for example, injection molded together as a single piece).

[0079] An example variation of a laparoscopic handheld controller is shown in Figure 6A. The laparoscopic hand controller 600 may include a hand portion 610 (for example, similar to the hand controller 230 described above), a tool shaft 630, and a shaft adapter 620 for attaching the tool shaft 630 to the hand portion 610. As shown in Figure 6B, the laparoscopic handheld controller 600 can generally be used to control a virtual manual laparoscopic stapler tool 600', although the laparoscopic handheld controller 600 can be used to control other types of virtual manual laparoscopic tools (for example, scissors, dissectors, graspers, needle holders, probes, forceps, biopsy tools, etc.). For example, the hand portion 610 can be associated with a virtual handle 610' of the virtual laparoscopic stapler tool 600' having a stapler end effector 640', so that the user's manipulation of the hand portion 610 is mapped to manipulation of the virtual handle 610'. Similarly, the tool shaft 630 can correspond to a virtual tool shaft 630' of the virtual manual laparoscopic stapler tool 600'. The tool shaft 630 and the virtual tool shaft 630' can be inserted into the cannula and the virtual cannula, respectively, so that movement of the tool shaft 630 with respect to the cannula is mapped to movement of the virtual tool shaft 630' within the virtual cannula in the virtual robotic surgical environment.

[0080] The hand portion 610 can include one or more interactive features, such as a finger trigger 612 and/or a button 614, which can receive user input from the user's fingers, palm, etc., and be communicatively coupled to the virtual reality processor. In this example embodiment, the finger trigger 612 can be mapped to a virtual trigger 612' on the virtual manual laparoscopic stapler tool 600'. The virtual trigger 612' can be visualized as actuating the virtual end effector 640' (for example, causing the virtual members of the virtual end effector 640' to close and fire the staples) to staple virtual tissue in the virtual environment. In this way, when the user pulls the finger trigger 612 on the laparoscopic handheld controller, the signal from the finger trigger 612 can be communicated to the virtual reality processor, which modifies the virtual manual laparoscopic stapler tool 600' to interact within the virtual environment in simulation of an actual manual laparoscopic stapler tool.
In another variation, a trigger attachment can physically resemble (for example, in shape and form) the virtual trigger 612' on the virtual laparoscopic stapler tool 600' and can be attached to the finger trigger 612, which can allow the laparoscopic handheld controller 600 to further mimic the user's feel of the virtual manual laparoscopic stapler tool 600'.

[0081] As shown in Figures 6C to 6E, the shaft adapter 620 can generally function to couple the tool shaft 630 to the hand portion 610, which can, for example, adapt a hand controller (similar to the hand controller 210 described above) into a laparoscopic handheld controller. The shaft adapter 620 may generally include a first end for coupling to the hand portion 610 and a second end for coupling to the tool shaft 630. As best shown in Figure 6E, the first end of the shaft adapter 620 may include a proximal portion 620a and a distal portion 620b configured to clamp onto a feature of the hand portion 610. For example, the hand portion 610 may generally include a ring-shaped portion defining a central space 614 that receives the proximal portion 620a and the distal portion 620b. The proximal portion 620a and the distal portion 620b can clamp onto either side of the ring-shaped portion at its inner diameter, and be attached to the ring-shaped portion by means of fasteners (not shown) passing through fastener holes 622, thereby fixing the shaft adapter 620 to the hand portion 610. Additionally or alternatively, the shaft adapter 620 can be coupled to the hand portion 610 in any suitable manner, such as by an interference fit, epoxy, interlocking features (for example, between the proximal portion 620a and the distal portion 620b), etc. As also shown in Figure 6E, the second end of the shaft adapter 620 may include a recess for receiving the tool shaft 630. For example, the recess may be generally cylindrical for receiving a generally cylindrical end of a portion of the tool shaft 630, such as through a press fit, friction fit, or other interference fit. Additionally or alternatively, the tool shaft 630 can be coupled to the shaft adapter 620 with fasteners (for example, screws, pins, epoxy, ultrasonic welding, etc.). The tool shaft 630 can be of any suitable size (for example, length, diameter) to mimic or represent a manual laparoscopic tool.

[0082] In some variations, the shaft adapter 620 may be selectively removable from the hand portion 610 to allow selective use of the hand portion 610 both as an independent hand controller (for example, hand controller 210) and as a laparoscopic handheld controller 600. Additionally or alternatively, the tool shaft 630 can be selectively removable from the shaft adapter 620 (for example, while the shaft adapter 620 may remain attached to the hand portion 610, the tool shaft 630 can be selectively removed from the shaft adapter 620 to convert the laparoscopic handheld controller 600 into an independent handheld controller 210).

[0083] In general, the tool features of the laparoscopic handheld controller 600, such as the shaft adapter 620 and the tool shaft 630, can be made from a rigid or semi-rigid plastic or metal, and can be formed through any suitable manufacturing process, such as 3D printing, injection molding, milling, turning, etc. The tool feature can include multiple types of materials, and/or weights or other masses, to further simulate the user's feel of a particular manual laparoscopic tool.
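The trigger-to-end-effector mapping described in paragraph [0080] above could be sketched as follows. This is a minimal, hypothetical example: the names VirtualStaplerState and updateVirtualStapler, the jaw angle values, and the thresholds are assumptions, intended only to show a normalized trigger value driving jaw closure and a latched fire event.

```cpp
#include <algorithm>

// Virtual stapler end effector state driven by the physical finger trigger.
struct VirtualStaplerState {
    double jawAngleDeg = 30.0;   // jaws fully open
    bool   staplesFired = false;
};

// Map a normalized trigger value (0 = released, 1 = fully squeezed) to jaw
// closure, and fire the virtual staples once when the trigger bottoms out.
void updateVirtualStapler(double triggerValue, VirtualStaplerState& state) {
    triggerValue = std::clamp(triggerValue, 0.0, 1.0);

    const double openAngleDeg = 30.0;                // jaw angle when fully open
    state.jawAngleDeg = openAngleDeg * (1.0 - triggerValue);

    const double fireThreshold = 0.98;               // nearly full squeeze
    if (triggerValue >= fireThreshold && !state.staplesFired) {
        state.staplesFired = true;                   // latch: one squeeze, one fire
        // ... notify the virtual environment to staple the virtual tissue ...
    } else if (triggerValue < 0.05) {
        state.staplesFired = false;                  // re-arm once released
    }
}
```

The latch prevents a single squeeze from firing repeatedly; releasing the trigger re-arms the virtual stapler, which is one simple way to approximate the behavior of an actual manual stapler.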
SYSTEM VARIATIONS

[0084] One or more aspects of the virtual reality system described above can be incorporated into other variations of systems. For example, in some variations, a virtual reality system for providing a virtual robotic surgical environment can interface with one or more features of a real robotic surgical environment. For example, as shown in Figure 3, a system 700 can include one or more processors (for example, a virtual reality processor 210) configured to generate a virtual robotic surgical environment, and one or more sensors 750 in a robotic surgical environment, where the one or more sensors 750 are in communication with the one or more processors. Sensor information from the robotic surgical environment can be used to detect a status of an aspect of the robotic surgical environment, in order to mimic or replicate characteristics of the robotic surgical environment in the virtual robotic surgical environment. For example, a user can monitor an actual robotic surgical procedure in a real operating room through a virtual reality system that interfaces with the real operating room (for example, the user can interact with a virtual reality environment that is a reflection of the conditions in the actual operating room).

[0085] In some variations, the one or more sensors 750 can be configured to detect a status of at least one robotic component (for example, a component of a robotic surgical system, such as a robotic arm, a tool driver attached to a robotic arm, a patient operating table to which a robotic arm is attached, a control tower, etc.) or of another component of a robotic surgical operating room. Such a status can indicate, for example, position, orientation, velocity, speed, operating state (for example, on or off, power level, mode), or any other suitable status of the component.

[0086] For example, one or more accelerometers can be coupled to a robotic arm link and be configured to provide information about the position, orientation, and/or speed of movement of the robotic arm link, etc. Multiple accelerometers on multiple robotic arms can be configured to provide information regarding imminent and/or present collisions between the robotic arms, between different links of a robotic arm, or between a robotic arm and a nearby obstacle having a known position.

[0087] As another example, one or more proximity sensors (for example, infrared sensors, capacitive sensors) can be coupled to a portion of a robotic arm or to other components of the robotic surgical system or surgical environment. Such proximity sensors can, for example, be configured to provide information regarding imminent collisions between objects. Additionally or alternatively, contact or touch sensors can be coupled to a portion of a robotic arm or to other components of the robotic surgical environment, and can be configured to provide information regarding a present collision between objects.

[0088] In another example, one or more components of the robotic surgical system or surgical environment may include markers (for example, infrared markers) to facilitate optical tracking of the position, orientation, and/or velocity of various components, such as with sensors monitoring the markers in the surgical environment. Similarly, the surgical environment may additionally or alternatively include cameras for scanning and/or modeling the surgical environment and its contents.
Such optical tracking sensors and/or cameras can be configured to provide information regarding imminent and/or present collisions between objects.

[0089] As another example, one or more sensors 750 can be configured to detect the status of a patient, a surgeon, or other surgical staff. Such a status can indicate, for example, position, orientation, velocity, speed, and/or biological metrics such as heart rate, blood pressure, temperature, etc. For example, a heart rate monitor, blood pressure monitor, thermometer, and/or oxygenation sensor, etc., can be attached to the patient and allow a user to observe the patient's condition.

[0090] In general, in these variations, the virtual reality processor 210 can generate a virtual robotic surgical environment similar to that described elsewhere in this document. In addition, upon receiving status information from the one or more sensors 750, the virtual reality processor 210 or another processor in the system can incorporate the detected status in any one or more suitable ways. For example, in one variation, the virtual reality processor 210 can be configured to generate a virtual reality replica, or near-replica, of a robotic surgical environment and/or of a robotic surgical procedure performed in it. For example, one or more sensors 750 in the robotic surgical environment can be configured to detect a status of a robotic component that corresponds to a virtual robotic component in the virtual robotic surgical environment (for example, the virtual robotic component can be substantially representative of the robotic component in form and/or visual function). In this variation, the virtual reality processor 210 can be configured to receive the detected status of the robotic component, and then modify the virtual robotic component based at least in part on the detected status so that the virtual robotic component mimics the robotic component. For example, if a surgeon moves a robotic arm to a particular pose during a robotic surgical procedure, then a virtual robotic arm in the virtual environment can move accordingly.

[0091] As another example, the virtual reality processor 210 can receive status information that indicates an alarm event, such as an imminent or present collision between objects, or a poor patient health condition. Upon receiving such information, the virtual reality processor 210 can provide a warning or alarm to the user about the occurrence of the event, such as by displaying a visual alert (for example, text, an icon indicating a collision, a view within the virtual environment illustrating a collision, etc.), an audio alert, etc.
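A minimal sketch of how incoming sensor statuses from the real environment might be checked for the collision alarms described above is given below; the SensorStatus fields, the threshold value, and the console alerts are illustrative assumptions rather than the system's actual interfaces.

```cpp
#include <iostream>
#include <string>
#include <vector>

// One reported status sample from a sensor in the (real) robotic surgical
// environment, forwarded to the virtual reality processor.
struct SensorStatus {
    std::string componentId;     // e.g. "robotic_arm_2/link_3" (hypothetical id)
    double      proximityMeters; // distance to the nearest tracked object
    bool        contact;         // true if a touch/contact sensor is triggered
};

// Scan incoming sensor statuses and raise an alert for present or imminent
// collisions, which the processor could surface as a visual and/or audio alarm.
void checkForCollisions(const std::vector<SensorStatus>& statuses,
                        double imminentThresholdMeters = 0.05) {
    for (const SensorStatus& s : statuses) {
        if (s.contact) {
            std::cout << "ALERT: collision detected at " << s.componentId << '\n';
        } else if (s.proximityMeters < imminentThresholdMeters) {
            std::cout << "WARNING: imminent collision near " << s.componentId
                      << " (" << s.proximityMeters << " m)\n";
        }
    }
}
```

In practice the alert would be routed to the head-mounted screen or audio device rather than a console, but the thresholding logic would be similar.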
[0092] As yet another example, one or more sensors in the robotic surgical environment can be used to compare an actual surgical procedure (occurring in the non-virtual robotic surgical environment) with a surgical procedure as planned in a virtual robotic surgical environment. For example, an expected position of at least one robotic component (for example, a robotic arm) can be determined during surgical pre-planning, as visualized with a corresponding virtual robotic component in a virtual robotic surgical environment. During an actual surgical procedure, one or more sensors can provide information about the measured position of the actual robotic component. Any differences between the expected position and the measured position of the robotic component may indicate deviations from a surgical plan that was built in the virtual reality environment. Since such deviations can possibly result in unintended consequences (for example, unintended collisions between robotic arms, etc.), identifying the deviations can allow the user to adjust the surgical plan accordingly (for example, reconfigure the approach to the surgical field, change surgical instruments, etc.).

USER MODES

[0093] In general, the virtual reality system may include one or more user modes to allow a user to interact with the virtual robotic surgical environment by moving and/or manipulating the portable controllers 230. Such interactions may include, for example, moving virtual objects (for example, a virtual robotic arm, a virtual tool, etc.) in the virtual environment, adding camera views to view the virtual environment simultaneously from multiple vantage points, and navigating within the virtual environment without requiring the user to move the head-mounted screen 220 (for example, by walking), etc., as further described below.

[0094] In some variations, the virtual reality system may include a plurality of user modes, where each user mode is associated with a respective subset of user interactions. As shown in Figure 8, at least some of the user modes can be shown on a display (for example, the head-mounted screen 220) for selection by the user. For example, at least some of the user modes can correspond to selectable user mode icons 812 displayed in a user mode menu 810. The user mode menu 810 can be superimposed on the display of the virtual robotic surgical environment so that a graphical representation 230' of the hand controller (or of the user's hand, another suitable representative icon, etc.) can be maneuvered by the user to select a user mode icon, thereby activating the user mode corresponding to the selected user mode icon. As shown in Figure 8, the user mode icons 812 can generally be arranged in a palette or circle, but can alternatively be arranged in a grid or other suitable arrangement. In some variations, a selected subset of the possible user modes can be displayed in the menu 810 based, for example, on user preferences (for example, associated with a set of user login information), preferences of users similar to the current user, the type of surgical procedure, etc.

[0095] Figure 14 illustrates an operating method 1400 of an example variation of a user mode menu that provides selection of one or more user mode icons. To activate the user menu, the user can activate a user input method associated with the menu. For example, the input method can be activated by the user gripping a hand controller

[0096] For example, in a variation in which a hand controller includes a circular menu button and the graphical representation of the hand controller also has a circular menu button displayed in the virtual reality environment, the arrangement of user mode icons can be centered around and aligned with the menu button, such that the normal vectors of the menu plane and the menu button are substantially aligned. The circular or radial menu can include, for example, multiple different menu regions (1414) or sectors, each of which can be associated with an angle range (for example, an arcuate segment of the circular menu) and a mode icon (for example, as shown in Figure 8). Each region can be switched between a selected state and an unselected state.
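The assignment of the hand controller's position to one of the arcuate menu regions could be implemented roughly as sketched below; the confirmation logic is elaborated in the following paragraphs. The function hitTestRadialMenu and the dead-zone parameter are hypothetical names and values, assuming the controller position has already been projected into the menu plane with the menu centered at the origin.

```cpp
#include <cmath>

// Result of hit-testing the hand controller's position against the circular
// user mode menu: either no icon (inside the central dead zone) or a sector.
struct MenuHit {
    bool selected;  // false if the controller is within the central dead zone
    int  sector;    // index of the arcuate segment / user mode icon
};

// Hit-test a point expressed in the menu plane, with the menu centered at the
// origin; each of iconCount icons owns an equal arcuate segment of the circle.
MenuHit hitTestRadialMenu(double x, double y, int iconCount,
                          double deadZoneRadius) {
    const double kPi = 3.14159265358979323846;
    const double r = std::sqrt(x * x + y * y);
    if (iconCount <= 0 || r < deadZoneRadius) {
        return {false, -1};                        // too close to the origin
    }
    double angle = std::atan2(y, x);               // range (-pi, pi]
    if (angle < 0.0) angle += 2.0 * kPi;           // normalize to [0, 2*pi)

    const double sectorSpan = 2.0 * kPi / iconCount;
    int sector = static_cast<int>(angle / sectorSpan);
    if (sector >= iconCount) sector = iconCount - 1;   // guard rounding at 2*pi
    return {true, sector};
}
```

The radial distance check corresponds to the distance threshold discussed below, and the sector index corresponds to the arcuate segment whose icon would be highlighted.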
[0097] The method 1400 can generally include determining the user's selection of a user mode and receiving confirmation that the user would like to activate the selected user mode for the virtual reality system.

[0098] After determining that a user has selected a particular user mode icon, the method may, in some variations, convey that selection to the user (for example, as confirmation) through visual and/or auditory indications. For example, in some variations, the method may include providing one or more visual cues (1430) in the displayed virtual reality environment in response to determining that a user has selected a user mode icon. As shown in Figure 14, examples of visual cues (1432) include modifying the appearance of the selected user mode icon (and/or of the arcuate segment associated with the selected user mode icon) with highlighting (for example, thickened outlines), animation (for example, oscillating lines, a "dancing" or "pulsating" icon), a change in size (for example, enlargement of the icon), a change in apparent depth, a change in color or opacity (for example, more or less translucent, a change in the fill pattern of the icon), a change in position (for example, moving radially outward from or toward the central origin, etc.), and/or any other suitable visual modification. In some variations, cueing the user in this or any other suitable manner can inform the user which user mode will be activated before the user confirms the selection of a specific user mode. For example, the method may include providing one or more visual cues (1430) as the user navigates or scrolls through the various user mode icons in the menu.

[0099] The user can confirm approval of the selected user mode icon in one or more different ways. For example, the user can release or deactivate the user input method (1440) associated with the menu (for example, releasing a button on the hand controller, disengaging a foot pedal), so as to indicate approval of the selected user mode. In other variations, the user can confirm the selection by hovering over the selected user mode icon for at least a predetermined period of time (for example, at least 5 seconds), by double-engaging the user input method associated with the menu (for example, double-clicking the button, etc.), by speaking a verbal command indicating approval, etc.

[00100] In some variations, upon receiving confirmation that the user approves the selected user mode, the method may include verifying which user mode icon was selected. For example, as shown in Figure 14, a test to verify which user mode icon was selected can include one or more conditions, which can be satisfied in any suitable order. For example, in variations in which the user mode icons are arranged in a generally circular palette around a central origin, the method may include determining the radial distance of the graphical representation of the hand controller from the central origin (1442) and/or its angular orientation with respect to the central origin (1446) when the user indicates approval of the user mode icon selection. In some variations, the conditions (1442) and (1446) may be similar to (1422) and (1424) described above, respectively. If at least one of the conditions (1442) and (1446) is not satisfied, then the release of the user input method can correlate to non-selection of a user mode icon (for example, the user may have changed his or her mind about selecting a new user mode).
Thus, if the graphical representation of the hand controller fails to satisfy the distance threshold (1442), then the original or previous user mode can be retained (1444). Similarly, if the graphical representation of the hand controller fails to correspond to an arcuate menu segment (1446), then the original or previous user mode can be retained (1448). If the graphical representation of the hand controller satisfies the distance threshold (1442) and corresponds to an arcuate segment of the menu, then the selected user mode can be activated (1450). In other variations, a user mode can additionally or alternatively be selected through other interactions, such as a voice command, eye tracking by sensors, etc. In addition, the system may additionally or alternatively suggest activating one or more user modes based on criteria such as user activity (for example, if the user is frequently turning his head to see details at the edge of his field of vision, the system can suggest a user mode that allows placement of a camera to provide a window view from a desired vantage point, as described below), the type of surgical procedure, etc.

OBJECT GRIPPING

[0100] An example of a user mode for the virtual robotic surgical environment allows a user to grab, move, or otherwise manipulate virtual objects in the virtual environment. Examples of manipulable virtual objects include, but are not limited to, virtual representations of physical items (for example, one or more virtual robotic arms, one or more virtual tool drivers, virtual manual laparoscopic tools, a virtual patient operating table or other rest surface, a virtual control tower or other equipment, a virtual user console, etc.) and other virtual or graphical constructs such as portals, window views, patient images, or other projections on an augmented display, etc., which are further described below.

[0101] At least some of the virtual objects can include or be associated with at least one virtual touch point or selectable feature. When the virtual touch point is selected by a user, the user can move (for example, adjust the position and/or orientation of) the virtual object associated with the selected virtual touch point. In addition, multiple virtual touch points can be selected simultaneously (for example, with multiple portable controllers 230 and their graphical representations 230') on the same virtual object or on multiple separate virtual objects.

[0102] The user can generally select a virtual touch point by moving a hand controller 230 to correspondingly move the graphical representation 230' to the virtual touch point in the virtual environment, and then engaging an interactive feature such as a trigger or button on the hand controller 230 to indicate selection of the virtual touch point. In some variations, a virtual touch point may remain selected as long as the user engages the interactive feature on the hand controller 230 (for example, as long as the user pulls a trigger) and may become unselected when the user releases the interactive feature. For example, the virtual touch point may allow the user to "click and drag" the virtual object via the virtual touch point. In some variations, a virtual touch point can be toggled between a selected state and an unselected state, whereby a virtual touch point can remain selected after a single engagement of the interactive feature on the hand controller until a second engagement of the interactive feature switches the virtual touch point to an unselected state.
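A compact sketch of the two selection behaviors described above for virtual touch points, hold-to-select ("click and drag") and toggle, is shown below; the class and method names are illustrative assumptions.

```cpp
// How a virtual touch point responds to the controller's interactive feature.
enum class TouchPointMode { HoldToSelect, Toggle };

// Tracks whether a virtual touch point is currently selected, given trigger
// press/release events from the hand controller hovering over it.
class VirtualTouchPoint {
public:
    explicit VirtualTouchPoint(TouchPointMode mode) : mode_(mode) {}

    void onTriggerPressed() {
        if (mode_ == TouchPointMode::HoldToSelect) {
            selected_ = true;           // selected only while the trigger is held
        } else {
            selected_ = !selected_;     // each press toggles the state
        }
    }

    void onTriggerReleased() {
        if (mode_ == TouchPointMode::HoldToSelect) {
            selected_ = false;          // releasing the trigger deselects
        }
    }

    bool isSelected() const { return selected_; }

private:
    TouchPointMode mode_;
    bool selected_ = false;
};
```

While a touch point reports itself as selected, the controller's motion would be applied to the associated virtual object, either directly or, for indirect touch points, to the second object it controls.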
In the virtual robotic surgical environment, either or both types of virtual touch points may be present.

[0103] A virtual object can include at least one virtual touch point for direct manipulation of the virtual object. For example, a virtual robotic arm in the virtual environment can include a virtual touch point on one of its virtual arm links. The user can move a hand controller 230 until the graphical representation 230' of the hand controller is in close proximity to (for example, hovering over) the virtual touch point, engage a trigger or other interactive feature on the hand controller 230 to select the virtual touch point, and then move the hand controller 230 to manipulate the virtual robotic arm via the virtual touch point. In this way, the user can manipulate the hand controller 230 in order to reposition the virtual robotic arm into a new pose, such as to create a more spacious workspace by the patient in the virtual environment, to test the virtual robotic arm's range of motion to determine the likelihood of collisions between the virtual robotic arm and other objects, etc.

[0104] A virtual object can include at least one virtual touch point that is associated with a second virtual object, for indirect manipulation of the second virtual object. For example, a virtual control panel can include a virtual touch point on a virtual switch or button that is associated with a patient operating table. The virtual switch or button can, for example, control the height or angle of the virtual patient operating table in the virtual environment, similar to how a switch or button on a real control panel might electronically or mechanically modify the height or angle of a real patient operating table. The user can move a hand controller 230 until the graphical representation 230' of the hand controller is in close proximity to (for example, hovering over)

[0105] When a virtual touch point is selected, the virtual reality processor can modify the virtual robotic surgical environment to indicate to the user that the virtual touch point is indeed selected. For example, the virtual object that includes the virtual touch point can be highlighted by being graphically rendered in a different color (for example, blue or red) and/or outlined with a different line weight or color, in order to visually contrast the affected virtual object with other virtual objects in the virtual environment. Additionally or alternatively, the virtual reality processor can provide audio feedback (for example, a tone, beep, or verbal acknowledgment) through an audio device indicating selection of the virtual touch point, and/or tactile feedback (for example, a vibration) through a hand controller 230, the head-mounted screen 220, or another suitable device.

NAVIGATION

[0106] Other examples of user modes for the virtual robotic surgical environment can allow a user to navigate and explore the virtual space within the virtual environment.

PRESSURE POINTS

[0107] In some variations, the system may include a user mode enabling "pressure points", or virtual targets within a virtual environment that can be used to assist the user's navigation within the virtual environment. A pressure point can, for example, be arranged at a user-selected or default location within the virtual environment, and allows a user to quickly navigate to that location upon selection of the pressure point.
A pressure point can, in some variations, be associated with an orientation within the virtual environment and/or an apparent scale (zoom level) of the displayed environment from that vantage point. Pressure points can, for example, be visually indicated as colored dots or other colored markers graphically displayed in the first-person perspective view. Upon selecting a pressure point, the user can be transported to the vantage point of the selected pressure point within the virtual robotic surgical environment.

[0108] For example, Figure 16 illustrates a method 1600 of operating an example variation of a user mode enabling pressure points. As shown in Figure 16, a pressure point can be positioned (1610) in the virtual environment by a user or as a predetermined setting. For example, a user can navigate a user mode menu as described above, select or "pick up" a pressure point icon from the menu with a hand controller (for example, indicated with a colored dot or another suitable marker), and drag and drop the pressure point icon at a desired location and/or orientation in the virtual environment. The pressure point can, in some variations, be repositioned by the user by selecting the pressure point again (for example, moving the graphical representation of the hand controller until it intersects the pressure point or a collision volume boundary around the pressure point, then engaging an input feature such as a button or trigger) and dragging and dropping the pressure point icon at a new desired location. In some variations, the user can configure the scale or zoom level of the vantage point (1620) associated with the pressure point, such as by adjusting a displayed scroll bar or a scroll wheel, by movements as described above for setting a scale level for manipulating the environment view, etc. The pressure point may, in some instances, have a default scale level associated with all pressure points or with a subcategory of pressure points, the scale level associated with the user's current vantage point when the user places the pressure point, or a scale level adjusted as described above. In addition, once a pressure point is set, the pressure point can be stored (1630) in memory (for example, local or remote storage) for future access. A pressure point can, in some variations, be deleted from the virtual environment and from memory. For example, a pressure point can be selected (in a manner similar to repositioning the pressure point) and designated for deletion by dragging it off-screen to a predetermined location (for example, a virtual trash bin) and/or moving it at a predetermined speed (for example, "thrown" in a direction away from the user's vantage point faster than a predetermined threshold), by selection of a secondary menu option, by voice command, etc.

[0109] Once one or more pressure points for a virtual environment are stored in memory, the user can select one of the stored pressure points (1640) for use. For example, upon selection of a stored pressure point, the user's vantage point can be adjusted to the position, orientation, and/or scale of the selected pressure point's settings (1650), thereby allowing the user to feel as if he or she were transported to the location associated with the selected pressure point. In some variations, the user's previous vantage point can be stored as a pressure point (1660) to allow an easy "undo" of the user's perceived teleportation and to transition the user back to his or her previous vantage point.
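One plausible data model for the pressure point mechanism described above is sketched below: each pressure point stores a position, orientation, and scale, and selecting one moves the user's vantage point while saving the previous vantage point as a temporary "undo" point. The structure and function names are assumptions for illustration.

```cpp
#include <array>
#include <vector>

// A stored pressure point: a vantage point (position and orientation) plus the
// apparent scale (zoom level) of the environment from that vantage point.
struct PressurePoint {
    std::array<double, 3> position{};
    std::array<double, 4> orientation{1.0, 0.0, 0.0, 0.0};  // quaternion (w, x, y, z)
    double scale = 1.0;
    bool   temporary = false;   // e.g. an auto-saved "undo" point
};

struct NavigationState {
    PressurePoint currentVantage;                 // the user's current vantage point
    std::vector<PressurePoint> storedPoints;      // user-placed or default points
};

// Teleport the user to a stored pressure point, saving the previous vantage
// point as a temporary pressure point so the jump can be undone.
void selectPressurePoint(NavigationState& nav, const PressurePoint& target) {
    PressurePoint undoPoint = nav.currentVantage;
    undoPoint.temporary = true;
    nav.storedPoints.push_back(undoPoint);

    nav.currentVantage = target;   // adjust position, orientation, and scale
}
```

The temporary flag is one way to model the auto-saved undo point described here; the system could expire such points after a short time or when a new teleportation occurs.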
This pressure point can be temporary (for example, disappearing after a predetermined period of time, such as after 5 to 10 seconds). In some examples, the user's previous vantage point can be stored as a pressure point only if the user's previous location is not a pre-existing pressure point. In addition, in some variations, a virtual trail or trajectory (for example, a line or arc) can be displayed in the virtual environment connecting the user's previous vantage point to the user's new vantage point associated with the selected pressure point, which can, for example, provide the user with context about how he or she teleported within the virtual environment. This visual indication can be removed from the virtual environment display after a predetermined period of time (for example, after 5 to 10 seconds).

[0110] Generally, in some variations, a pressure point can operate in a manner similar to the portals described below, except that a pressure point can indicate a vantage point without providing a window view of the virtual environment. For example, pressure points can be placed at vantage points selected by the user outside and/or inside the virtual patient, and can be linked into one or more trajectories similar to those of the portals. In some variations, pressure point trajectories can be defined by the user in a manner similar to that described below for portals.

PORTALS

[0111] In some variations, the system may include a user mode that facilitates the placement of one or more portals, or teleportation points, at user-selected locations in the virtual environment.
In these variations, if a location of the 910 portal is confirmed in a prohibited location (for example, compared and corresponded to a list of prohibited vantage points stored in memory), the user's apparent location in the virtual environment can be maintained without changes. However, if a 910 portal location is confirmed as allowed (for example, compared and not matched among the list of prohibited views), the user's apparent location in the virtual environment can be updated as described above. [0113] [0113] In some variations, once the user has placed the 910 portal at a desired vantage point, a window view of the virtual environment from the vantage point of the placed 910 portal can be displayed within the 910 portal, thus offering a "preview" of the view offered by portal 910. The user can, for example, see through portal 910 with full parallax, so that portal 910 behaves like a type of magnifying glass. For example, while looking through portal 910, the user can view the virtual environment as if the user had been scaled to the inverse of the portal scale factor (which affects interpupillary distance and focal length) and as if the user had been translated into the reciprocal of the portal scale factor (1 / portal scale factor) from the distance from portal 910 to the user's current location. In addition, the 910 portal can include an "event horizon" that can be a texture in a plane that is provided, for example, using one or more additional cameras (described below) in the virtual environment scene positioned as described above. In these variations, when "traveling" through the 910 portal after selecting the 910 portal for teleportation, the user's view of the virtual environment can naturally converge with the user's apparent vantage point during the user's approach to the portal, since the point advantage of the user is shifted as a fraction of the portal distance (by 1 / portal scale factor). Consequently, the user can feel as if he / she is entering smoothly and naturally in the visualization of the virtual environment at the scale factor associated with the selected portal. [0114] [0114] As shown in Figure 9A, in some variations, portal 910 can generally be circular. However, in other variations, one or more 910 portals can be any suitable shape, such as elliptical, square, rectangular, irregular, etc. Additionally, the view of the virtual environment window that is displayed on the portal can display the virtual environment at a scale factor associated with the portal, so that the view of the virtual environment displayed on different portals can be displayed at different zoom levels ( for example, 1x, 1.5x, 2x, 2.5x, 3x, etc.), thereby also changing the user's scale with respect to the environment. A scale factor of the window view in a portal can also indicate or correspond to the scale of the view that would be displayed if the user were transported to that vantage point of the portal. For example, if the view of the virtual environment outside a virtual patient is about 1x, then the view of the virtual environment window within the virtual patient can be about 2x or more, thereby providing a user with more details of the internal tissue of the virtual patient. The scale factor can be defined by the user or predetermined by the system (for example, based on the location of the portal in the virtual environment). 
In some variations, a scale factor may correlate with the displayed size of the 910 portal, although in other variations, the scale factor may be independent of the size of the portal. [0115] [0115] In some variations, a 910 portal can be arranged at substantially any vantage point in the virtual environment that the user desires. For example, a 910 portal can be placed anywhere on a virtual terrain surface of a virtual operating room or on a virtual object (for example, table, chair, user console, etc.). As another example, as shown in Figure 9B, a portal 910 can be arranged in the middle at any suitable elevation above the virtual terrain surface. As yet another example, as shown in Figure 9C, a portal can be arranged on or within a virtual patient, such as portals 910a and 910b which are arranged on a patient's abdomen and allow views of the intestines and other internal organs of the virtual patient (for example, simulated augmented reality). In the above example, the virtual patient can be generated from medical image and other information for a real (non-virtual) patient, so that portals 910a and 910b can allow the user to have an immersion view of an accurate representation of the real tissue of the patient (for example, to see tumors, etc.), and / or generated from internal virtual cameras (described below) arranged within the patient. In some variations, the system may limit the placement of a 910 portal according to predefined guidelines (for example, just outside the patient or only inside the patient), which may correspond, for example, to a type of simulated surgical procedure or to a level of training (for example, "beginner" or "advanced" user level) associated with the virtual environment. These prohibited places can be indicated to the user, for example, by a visual change in the 910 portal as it is placed (for example, changing the color of the outline, showing a gray or opaque view of the window inside the door 910 while it is being placed) and / or auditory indications (for example, beep, tone, verbal feedback). In yet other variations, the system may additionally or alternatively include one or more 910 portals placed in predetermined locations, such as on a virtual user console in the virtual environment adjacent to the patient's virtual table, etc. Said predetermined locations can, for example, depend on the type of procedure, that is, saved as part of a configuration file. [0116] [0116] A 910 portal can be visible from any "side" (for example, front and back side) of the portal. In some variations, the view on one side of portal 910 may be different from the opposite side of portal 910. For example, when viewed from the first side (for example, from the front) of portal 910, the portal may provide a view of the virtual environment with a scale factor and parallax effects, as described above, while when viewed from a second side (eg from behind) of the 910 portal, the portal can provide a view of the virtual environment with a scale factor of about one. As another example, the portal can provide a view of the virtual environment with a scale factor and parallax effects when viewed from the first side and the second side of the portal. [0117] [0117] In some variations, multiple 910 portals can be linked in sequence to include the trajectory in the virtual environment. For example, as shown in Figure 9C, a first-person perspective view of the virtual robotic surgical environment from a first vantage point can be displayed (for example, an immersion view). 
The user can have a first 910a portal at a second vantage point that is different from the first vantage point (for example, closer to the virtual patient than the first vantage point) and a first view of the surgical environment window virtual robot from the second vantage point can be displayed on the first portal 910a. Similarly, the user can arrange a second 910b portal at a third vantage point (for example, closer to the patient than the first and second vantage points), and a second window view of the virtual robotic surgical environment can be displayed on a second 910b portal. The user can provide a user information entry by associating the first and second portals 910a and 910b (for example, by selection with portable controllers, drawing a line between the first and second portals with portable controllers, etc.) so that the first and second portals are linked in sequence, thereby generating the path between the first and second portals. [0118] [0118] In some variations, after several 910 portals are linked to generate a trajectory, transport along the trajectory may not require the explicit selection of each sequential portal. For example, once in the trajectory (for example, at the second vantage point), travel between linked portals can be accomplished by triggering a trigger, button, touch pad, scroll wheel, another interactive feature of the hand controller, voice command, etc. [0119] [0119] Additional portals can be linked in a similar way. For example, two, three or more portals can be linked in series to generate an extended trajectory. As another example, several portals can form "branched" trajectories, in which at least two trajectories share at least one common portal, but otherwise, each trajectory has at least one exclusive portal for that trajectory. As another example, several portals can form two or more trajectories that do not share portals in common. The user can select the trajectory on which to travel, such as when using hand controllers and / or voice commands, etc. One or more paths between portals can be visually indicated (for example, with a dotted line, color code of the portals along the same path, etc.), and this visual path indication can be activated and deactivated, as based on user preference. [0120] [0120] Other features of the portal can facilitate easy navigation of trajectories between portals. For example, a portal may change color when the user enters and passes through that portal. As shown in Figure 9C, in another example, a portal itself can be displayed with direction arrows indicating the allowed direction of the path including that portal. In addition, the displacement along the trajectories can be performed with an "undo" command (via hand controllers and / or voice command, etc.) that returns the user to the previous vantage point (for example, displays the view of the virtual environment from the previous point of view). In some variations, an initial or standard vantage point can be established (such as according to the user's preference or system settings) to allow the user to quickly return to that initial advantage point with a shortcut command, such as a interactive feature on a handheld controller or a voice command (for example, "Reset my position"). For example, an initial or standard view can be on a user's virtual console or adjacent to the virtual patient table. 
[0121] [0121] The user mode that facilitates the placement and use of portals, or another separate user mode, can additionally facilitate the deletion of one or more portals. For example, a portal can be selected for deletion with portable controllers. As another example, one or more portals can be selected for deletion via voice command (for example, "delete all portals" or "delete portal A"). FREE NAVIGATION [0122] [0122] The system can include a user mode that facilitates free navigation around the virtual robotic surgical environment. For example, as described in this document, the system can be configured to detect the user's walking movements based on sensors on a head-mounted screen and / or portable controllers, and can correlate the user's movements in repositioning within a room of virtual operation. [0123] [0123] In another variation, the system may include a flight mode that allows the user to quickly navigate the virtual environment in a "flying" manner at different elevations and / or speeds and at different angles. For example, the user can navigate in flight mode by directing one or more hand controllers and / or the headset in the desired direction for the flight. Interactive features on the hand controller can further control the flight. For example, a directional pad or touch pad can provide control for forward, backward, side-to-side movements, etc., while maintaining substantially the same perspective view of the virtual environment. The translation can, in some variations, occur without acceleration, because the acceleration can increase the probability of motion sickness by the simulator. In another user configuration, a directional keyboard or touch pad (or headset orientation) can provide control for elevating the user's apparent location in the virtual environment. In addition, in some variations similar to those described above in relation to portals, an initial or standard vantage point within the flight mode can be established to allow a user to quickly return to that initial vantage point with a shortcut command. Parameters such as flight speed in response to user input can be adjustable by the user and / or set by the system by default. [0124] [0124] In addition, in flight mode, the scale factor of the displayed view can be controlled by hand controllers. The scale factor can, for example, affect the apparent elevation of the user's location in the virtual environment. In some variations, the user can use the hand controls to separate two points in the displayed view to zoom out and zoom in two points in the displayed view to zoom in or, conversely, separate two points in the displayed view to enlarge two points in the view displayed to zoom out. Additionally or alternatively, the user can use voice commands (for example, "zoom in to 2x") to change the scale factor of the displayed view. For example, Figures 10A and 10B illustrate exemplary views of the virtual environment that are relatively "enlarged" and "reduced", respectively. Parameters such as the speed of change of the scale factor, minimum and maximum range of the scale factor, etc. they can be adjusted by the user and / or defined by the system by default. [0125] [0125] As the user freely navigates the virtual environment in flight mode, the view displayed may include features to reduce eye fatigue, nausea, etc. 
For example, in some variations, the system may include a "comfort mode" in which external regions of the displayed view are removed as the user navigates in flight mode, which can, for example, help reduce motion sickness. user movement. As shown in Figure 10C, when in comfort mode, the system can define a transition region 1030 between an internal transition boundary 1010 and an external transition boundary 1020 around a focal area (eg, center) of the view of the user. Within the transition region (within the 1010 internal transition limit), a normal view of the virtual robotic surgical environment is displayed. Outside the transition region (outside the 1020 outer transition boundary), a neutral view or simple background (for example, a light gray background) is displayed. Within the 1030 transition region, the displayed view may have a gradient that gradually transitions the view from the virtual environment to the neutral view. Although the transition region 1030 shown in Figure 10C is represented as generally circular, with internal and external transition limits generally circular 1010 and 1020, in other variations the internal and external transition limits 1010 and 1020 may define a transition region 1030 which is elliptical or otherwise suitable. In addition, in some variations, various parameters of the transition region, such as size, shape, gradient, etc. they can be adjusted by the user and / or defined by the system by default. [0126] [0126] In some variations, as shown in Figure 11, the user can view the virtual robotic surgical environment from the model view that allows the user to view the virtual operating room from a vantage point , with a top-down perspective. In the mockup view, the virtual operating room can be displayed in a smaller scale factor (for example, smaller than natural size) on the screen, thereby changing the user's scale with respect to the virtual operating room. The model view can provide the user with additional contextual science of the virtual environment, as the user can see the entire virtual operating room at once, as well as the arrangement of its contents, such as virtual equipment, personnel virtual, virtual patient, etc. Through the model view, for example, the user can rearrange virtual objects in the virtual operating room with broader contextual science. The model view can, in some variations, be linked in a path along the portals and / or pressure points described above. ROTATION OF THE ENVIRONMENT VIEW [0127] [0127] In some variations, the system may include a user mode that allows the user to navigate the virtual robotic surgical environment by moving the virtual environment around its current vantage point. The environment view rotation mode can offer a different way in which the user can navigate the virtual environment, such as by "picking up" and manipulating the environment as if it were an object. As the user navigates through the virtual environment in this way, a "comfort mode" similar to that described above can additionally be implemented to help reduce motion sickness related to the simulation. For example, in an environment view rotation mode, the user can rotate a displayed scene around a current vantage point by selecting and dragging the view of the virtual environment around the current user's vantage point. In other words, in the environment view rotation mode, the user's apparent location in the virtual environment appears fixed while the virtual environment can be moved. 
This is in contrast to other modes, such as, for example, the flight mode described above, in which the environment can generally appear fixed while the user moves. Similar to the scaling factor settings described above for flight mode, in rotating environment view mode, the scaling factor of the displayed environment view can be controlled by portable controllers and / or voice commands (for example, by using portable controllers to select and separate two points in the displayed view to enlarge, etc.). [0128] [0128] For example, as shown in Figure 15, in an example of variation of a 1500 method to operate in an environment view rotation mode, the user can activate a user method information input (1510) such as on a hand controller (for example, a button or trigger or other suitable feature) or any suitable device. In some variations, a hand controller (1520) can be detected by activating the user method information input. The original position of the hand controller at the time of activation can be detected and stored (1522). Subsequently, as the user moves the hand controller (for example, while continuing to activate the user method information input), the current position of the hand controller can be detected (1524). A difference vector between the original (or previous) position and the current position of the hand controller can be calculated (1526), and the position of the user's vantage point can be adjusted (1528) based at least partially on the vector difference calculated, thereby creating an effect that makes the user feel as though he is "picking up" and dragging the virtual environment around. [0129] [0129] In some variations, two portable controllers (1520 ’) can be detected by activating the user method information input. The original positions of the portable controllers can be detected (1522 '), and a central point and an original vector between the original positions of the portable controllers (1523') can be calculated and stored. Subsequently, as the user moves one or more or both of the handheld controllers (for example, while continuing to activate user method information input), the current positions of the handheld controllers can be detected (1524 ') and used to form the basis for the vector difference calculated between the original and current vectors between handheld controllers (1528 '). The position and / or orientation of the user's vantage point can be adjusted (1528 ’), based on the calculated vector difference. For example, the orientation or rotation of the displayed view can be rotated around the center point between the locations of the hand controller, thereby creating an effect that makes the user feel that he is "picking up" and dragging the surrounding environment. Similarly, the screen scale of the virtual environment (1529 ') can be adjusted based on the difference calculated in distance between the two portable controllers, thereby creating an effect that makes the user feel that he is "picking up" and enlarging and reducing the displayed view of the virtual environment. [0130] [0130] Although the user modes described above are described separately, it should be understood that aspects of said modes characterize examples of modes in which a user can navigate the virtual robotic surgical environment, and can be combined into a single user mode. In addition, some of these aspects may be perfectly connected. For example, an aerial viewpoint generally associated with flight mode can be sequentially linked to one or more portals on a trajectory. 
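A simplified sketch of the one-controller drag and two-controller rotation described for method 1500 is given below; it assumes a vantage point reduced to a position plus a yaw heading, and the sign conventions and function names are illustrative assumptions rather than the system's actual implementation.

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Simplified user vantage point: a position and a yaw (heading) in radians.
struct Vantage {
    Vec3   position{};
    double yawRadians = 0.0;
};

// One-controller drag: moving the hand while the input is held drags the
// environment, which is equivalent to moving the vantage point in the
// opposite direction (one plausible sign convention).
void dragEnvironment(Vantage& vantage, const Vec3& previousHandPos,
                     const Vec3& currentHandPos) {
    for (int i = 0; i < 3; ++i) {
        vantage.position[i] -= (currentHandPos[i] - previousHandPos[i]);
    }
}

// Two-controller rotation: the change in heading of the vector between the
// two controllers rotates the vantage point around the controllers' midpoint
// (yaw only, for brevity).
void rotateEnvironment(Vantage& vantage,
                       const Vec3& leftStart, const Vec3& rightStart,
                       const Vec3& leftNow, const Vec3& rightNow) {
    const double yawStart = std::atan2(rightStart[2] - leftStart[2],
                                       rightStart[0] - leftStart[0]);
    const double yawNow = std::atan2(rightNow[2] - leftNow[2],
                                     rightNow[0] - leftNow[0]);
    const double deltaYaw = yawNow - yawStart;

    // Midpoint between the controllers at the start of the gesture.
    const Vec3 pivot = {(leftStart[0] + rightStart[0]) * 0.5,
                        (leftStart[1] + rightStart[1]) * 0.5,
                        (leftStart[2] + rightStart[2]) * 0.5};

    // Rotate the vantage position about the pivot (in the horizontal plane)
    // by the opposite of the controllers' rotation, and update the heading.
    const double c = std::cos(-deltaYaw), s = std::sin(-deltaYaw);
    const double dx = vantage.position[0] - pivot[0];
    const double dz = vantage.position[2] - pivot[2];
    vantage.position[0] = pivot[0] + c * dx - s * dz;
    vantage.position[2] = pivot[2] + s * dx + c * dz;
    vantage.yawRadians -= deltaYaw;
}
```

The display scale adjustment based on the change in distance between the two controllers would follow the same pattern, scaling about the same midpoint.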
[0130] Although the user modes above are described separately, it should be understood that aspects of these modes characterize example ways in which a user can navigate the virtual robotic surgical environment, and that they can be combined into a single user mode. In addition, some of these aspects can be seamlessly linked. For example, an aerial vantage point generally associated with flight mode can be sequentially linked to one or more portals along a trajectory. Furthermore, in some variations, a vantage point or a displayed view of the virtual environment (for example, as adjusted by one or more of the user modes above) can be linked to at least one default vantage point (for example, a default position, orientation, and/or scale). For example, by activating a user input (for example, on a handheld controller, foot pedal, etc.), a user can "reset" the current vantage point to a designated or predetermined vantage point in the virtual environment. The user's current vantage point can, for example, be gradually and smoothly animated to transition to the default position, orientation, and/or scale values.

SUPPLEMENTARY VIEWS

[0131] In some variations, an example user mode of the system may display one or more supplementary views of additional information to a user, such as superimposed on or inset within the primary, first-person perspective view of the virtual robotic surgical environment.

[0132] As another example, in a virtual command station mode, one or more supplementary views can be displayed in a virtual space with one or more content windows or panels arranged in front of the user in the virtual space (for example, similar to a navigable menu). For example, as shown in Figure 13, several content windows (for example, 1310a, 1310b, 1310c, and 1310d) can be positioned in a semicircular arrangement or in another arrangement suitable for display to the user; a minimal layout sketch is given after this paragraph group. The arrangement of the content windows can be adjusted by the user (for example, using the handheld controllers and their graphical representations 230' to select and drag or rotate content windows). The content windows can display, for example, an endoscope video feed, a portal view, a "stadium" overhead view of the virtual operating room, patient data (for example, imaging), and other camera views or patient information such as that described herein. By viewing several panels simultaneously, the user can monitor multiple aspects of the virtual operating room and/or the patient at once, allowing the user to have a comprehensive, broad awareness of the virtual environment. For example, the user can become aware of, and respond more quickly to, any adverse events in the virtual environment (for example, simulated negative reactions of the virtual patient during a simulated surgical procedure).

[0133] In addition, the virtual command station mode can allow a user to select any of the content windows and become immersed in the displayed content (for example, with a first-person perspective). Such a fully immersive mode can temporarily dismiss the other content windows or minimize them (for example, relegating them to a HUD overlaid on the selected immersive content). As an illustrative example, in the virtual command station mode, the system can display several content windows, including a video feed from an endoscopic camera showing the inside of a virtual patient's abdomen. The user can select the endoscopic camera video feed in order to become fully immersed in the virtual patient's abdomen (for example, while still manipulating the robotic arms and the instruments attached to the arms).
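As an illustrative aid only (not part of the original disclosure), the semicircular arrangement of content windows mentioned in paragraph [0132] could be computed along the lines of the following sketch, which places panel poses on an arc centered in front of the user; the function name, coordinate convention, and default values are assumptions.

```python
# Illustrative sketch only: arrange N content windows on a semicircular arc
# facing a user at the origin (one possible virtual command station layout).
import math

def semicircle_layout(num_panels: int, radius: float = 1.5,
                      arc_degrees: float = 150.0, height: float = 1.4):
    """Return (x, y, z, yaw_degrees) poses for panels in front of the user,
    each yawed to face the origin."""
    poses = []
    start = -arc_degrees / 2.0
    step = arc_degrees / max(num_panels - 1, 1)
    for i in range(num_panels):
        angle = math.radians(start + i * step)
        x = radius * math.sin(angle)             # left/right offset
        z = -radius * math.cos(angle)            # in front of the user (-z forward)
        yaw = math.degrees(math.atan2(-x, -z))   # rotate the panel toward the user
        poses.append((x, height, z, yaw))
    return poses

# Example: four panels such as 1310a through 1310d.
for pose in semicircle_layout(4):
    print(tuple(round(v, 2) for v in pose))
```

Spacing the panels by equal angular steps keeps every window at the same distance from the user, which is one simple way to make all panels equally readable.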
CAMERA VIEWS

[0134] In some variations, a user mode may allow the user to place a virtual camera at a selected vantage point in the virtual environment, and a window view of the virtual environment from the selected vantage point can be displayed on the HUD so that the user can simultaneously see both his or her first-person field of view and the camera view (the view provided by the virtual camera), which can update in real time. The virtual camera can be placed at any suitable location in the virtual environment (for example, inside or outside the patient, above the patient, above the virtual operating room, etc.). For example, as shown in Figure 12, the user can place a virtual camera 1220 (for example, using an object grip as described above) near the pelvic region of a virtual patient and facing the patient's abdomen, so as to provide a "virtual video feed" of the patient's abdomen. Once placed, the virtual camera 1220 can subsequently be repositioned. A camera view (for example, an inset circular view, or a window of any other suitable shape) can be placed on the HUD as a window view showing the virtual video feed from the vantage point of the virtual camera 1220. Likewise, several virtual cameras can be placed in the virtual environment to allow multiple camera views to be shown on the HUD. In some variations, a predetermined arrangement of one or more virtual cameras can be loaded as part of a configuration file for the virtual reality processor to incorporate into the virtual environment.

[0135] In some variations, the system can offer a range of different types of virtual cameras, which can provide different types of camera views. One example variation of a virtual camera is a "cinematic" camera that is configured to provide a live virtual feed of the virtual environment (for example, the view from the cinematic camera 1212 in Figure 12). Another example variation of a virtual camera is an endoscopic camera that is attached to a virtual endoscope to be placed in a virtual patient. In this variation, the user can, for example, virtually perform the technique of introducing the virtual endoscopic camera into the virtual patient and subsequently monitor the internal workspace inside the patient by viewing the virtual endoscopic video feed (for example, the view from the endoscopic camera 1214 in Figure 12). In another example variation, the virtual camera can be a wide-angle camera (for example, 360-degree, panoramic, etc.) that is configured to provide a larger field of view of the virtual environment. In this variation, the window camera view can, for example, be displayed as a fisheye or generally spherical display.

[0136] Various aspects of the camera view can be adjusted by the user. For example, the user can adjust the location, size, scale factor, etc. of the camera view (for example, similar to the portal adjustments described above). As another example, the user can select one or more filters or other special image effects to be applied to the camera view. Example filters include filters that highlight a particular anatomical feature (for example, tumors) or particular tissue characteristics.

[0137] In some variations, the camera view can function similarly to a portal (described above) to allow the user to quickly navigate around the virtual environment. For example, with reference to Figure 12, a user can select the camera view 1212 (for example, highlight or grab the camera view 1212 and pull it toward himself or herself) in order to be transported to the vantage point of the camera 1212.
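As an illustrative aid only (not part of the original disclosure), the configuration-file mechanism mentioned in paragraph [0134] could take a form along these lines; the JSON schema, field names, and VirtualCamera class are assumptions rather than the system's actual configuration format.

```python
# Illustrative sketch only: one way a predetermined arrangement of virtual
# cameras might be described in a configuration file and loaded for the
# virtual reality processor. Schema and names are assumptions.
import json
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    camera_id: str
    camera_type: str      # e.g., "cinematic", "endoscopic", "wide_angle"
    position: tuple       # (x, y, z) in the virtual operating room
    orientation: tuple    # (roll, pitch, yaw) in degrees
    field_of_view: float  # degrees

EXAMPLE_CONFIG = """
{
  "virtual_cameras": [
    {"id": "overhead", "type": "cinematic", "position": [0.0, 2.5, 0.0],
     "orientation": [0, -90, 0], "fov": 90},
    {"id": "endoscope", "type": "endoscopic", "position": [0.1, 1.0, -0.3],
     "orientation": [0, 15, 0], "fov": 70}
  ]
}
"""

def load_virtual_cameras(config_text: str):
    data = json.loads(config_text)
    return [VirtualCamera(c["id"], c["type"], tuple(c["position"]),
                          tuple(c["orientation"]), c["fov"])
            for c in data["virtual_cameras"]]

for cam in load_virtual_cameras(EXAMPLE_CONFIG):
    print(cam.camera_id, cam.camera_type)
```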
PATIENT DATA VIEWS, ETC.

[0138] In some variations, a user mode may allow patient data and other information to be displayed on the HUD or in another suitable location on the display. For example, patient imaging information (for example, ultrasound, X-ray, MRI, etc.) can be displayed in a supplementary view superimposed over the virtual patient (for example, as simulated augmented reality). A user can, for example, view patient images as a reference while interacting with the virtual patient. As another example, the patient's vital signs (for example, heart rate, blood pressure, etc.) can be displayed to the user in a supplementary view.

[0139] In another variation, a user mode may allow the display of other suitable information, such as training videos (for example, exemplary surgical procedures recorded from a prior procedure), a video feed from a mentoring or coaching surgeon, etc., to provide guidance to a user.

VIRTUAL REALITY SYSTEM APPLICATIONS

[0140] In general, the virtual reality system can be used in any suitable scenario in which it is useful to simulate or replicate a robotic surgical environment. In some variations, the virtual reality system can be used for training purposes, such as allowing a surgeon to practice controlling a robotic surgical system and/or practice performing a particular type of minimally invasive surgical procedure using a robotic surgical system. The virtual reality system can allow a user to better understand the movements of the robotic surgical system in response to user commands, both inside and outside the patient. For example, a user can wear the head-mounted display under the supervision of a mentor or trainer who can view the virtual reality environment alongside the user (for example, through a second head-mounted display, through an external display, etc.) and guide the user through operation of a virtual robotic surgical system within the virtual reality environment. As another example, a user can wear the head-mounted display and view on the immersive display (for example, in a content window, on the HUD, etc.) a training-related video, such as a recording of a previously performed surgical procedure.

[0141] As another example, the virtual reality system can be used for surgical planning purposes. For example, a user can operate the virtual reality system in order to plan a surgical workflow. Configuration files for virtual objects (for example, a robotic surgical system including arms and tool drivers, a user console, end effectors, other equipment, a patient bed, a patient, personnel, etc.) can be loaded into a virtual robotic surgical environment as representative of the actual objects that will be in the actual (that is, real, non-virtual) operating room.

[0142] As yet another example, the virtual reality system can be used for R&D purposes (for example, simulation). For example, a method for designing a robotic surgical system may include generating a virtual model of a robotic surgical system, testing the virtual model of the robotic surgical system in a virtual operating room environment, modifying the virtual model of the robotic surgical system based on the testing, and building the robotic surgical system based on the modified virtual model.
Aspects of the virtual model of the robotic surgical system that can be tested in the virtual operating room environment include physical characteristics of one or more components of the robotic surgical system (for example, the diameter or length of arm links). For example, a virtual model of a particular robotic arm configuration can be constructed and implemented in a virtual environment, where the virtual model can be tested with respect to particular arm movements, surgical procedures, etc. (for example, testing the likelihood of collision between the robotic arm and other objects). Accordingly, a robotic arm configuration (or, similarly, any other component of the robotic surgical system) can be at least initially tested by testing a virtual implementation of the configuration, rather than a physical prototype, thereby accelerating the R&D cycle and reducing costs.

[0143] Other aspects that can be tested include the functionality of one or more components of the robotic surgical system (for example, control modes of a control system). For example, as described above, a virtual operating environment application can pass status information to a kinematics application, and the kinematics application can generate and pass commands based on control algorithms, whereby the virtual reality processor can use the commands to effect changes in the virtual robotic surgical environment (for example, moving a virtual robotic arm in a particular manner according to the relevant control algorithms). As such, control software algorithms can be incorporated into a virtual robotic system for testing, refinement, etc., without requiring a physical prototype of the relevant robotic component, thereby conserving R&D resources and accelerating the R&D cycle.
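As an illustrative aid only (not part of the original disclosure), the status/command exchange described in paragraph [0143] might be organized along the lines of the following simplified sketch, in which a stand-in kinematics application computes joint commands from the virtual arm's reported status; the classes, the proportional control law, and all names are assumptions.

```python
# Illustrative sketch only: a simplified status/command loop between a virtual
# operating environment and a kinematics application. Names are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class ArmStatus:
    joint_angles: List[float]      # current virtual joint angles (radians)

@dataclass
class ArmCommand:
    joint_velocities: List[float]  # commanded joint velocities (rad/s)

class KinematicsApp:
    """Stands in for a kinematics application and its control algorithms."""
    def __init__(self, target_joint_angles: List[float], gain: float = 1.0):
        self.target = target_joint_angles
        self.gain = gain

    def compute_command(self, status: ArmStatus) -> ArmCommand:
        # simple proportional control toward a target pose
        return ArmCommand([self.gain * (t - q)
                           for t, q in zip(self.target, status.joint_angles)])

class VirtualArm:
    """Stands in for a virtual robotic arm in the virtual operating environment."""
    def __init__(self, joint_angles: List[float]):
        self.joint_angles = joint_angles

    def status(self) -> ArmStatus:
        return ArmStatus(list(self.joint_angles))

    def apply(self, command: ArmCommand, dt: float):
        self.joint_angles = [q + v * dt for q, v in
                             zip(self.joint_angles, command.joint_velocities)]

# One simulated control tick: the environment reports status, the kinematics
# application returns a command, and the virtual arm state is updated.
arm = VirtualArm([0.0, 0.5, -0.2])
controller = KinematicsApp(target_joint_angles=[0.1, 0.4, 0.0])
arm.apply(controller.compute_command(arm.status()), dt=0.01)
print(arm.joint_angles)
```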
[0144] In another example, the virtual reality system can be used to allow multiple surgeons to collaborate in the same virtual reality environment. For example, multiple users can wear head-mounted displays and interact with one another (and with the same virtual robotic system, the same virtual patient, etc.) in the virtual reality environment. The users can be physically in the same room or general location, or they can be remote from one another. For example, one user can tele-mentor another as they collaborate to perform a surgical procedure on the virtual patient.

[0145] Specific illustrative example applications of the virtual reality system are described in further detail below. However, it should be understood that applications of the virtual reality system are not limited to these examples and the general application scenarios described herein.

EXAMPLE 1 - OVER THE BED

[0146] A user can use the virtual reality system to simulate an over-the-bed scenario, in which the user is adjacent to a patient bed or table and operates both a robotic surgical system and a manual laparoscopic tool. Such a simulation can be useful for training, surgical planning, etc. For example, the user can staple a target segment of tissue in a virtual patient's intestine using a virtual robotic tool and a virtual manual tool.

[0147] In this example, the user wears a head-mounted display providing an immersive view of a virtual reality environment, and can use handheld controllers to navigate within the virtual reality environment until adjacent to a virtual patient table on which a virtual patient is lying. A proximal end of a virtual robotic arm is attached to the virtual patient table, and a distal end of the virtual robotic arm supports a virtual tool driver that actuates virtual forceps positioned inside the virtual patient's abdomen. A virtual manual laparoscopic stapler tool is passed through a virtual cannula, with its distal end positioned inside the virtual patient's abdomen. Additionally, a virtual endoscopic camera is positioned inside the virtual patient's abdomen and provides a virtual camera feed showing the surgical workspace inside the virtual patient's abdomen (which includes patient tissue, the robotically controlled virtual forceps, and the virtual manual laparoscopic stapler tool).

[0148] The user continues to view the virtual environment through the immersive display of the head-mounted display, as well as the virtual endoscopic camera feed displayed in a window view superimposed on the user's field of view. The user holds in one hand a handheld controller configured to control the robotically driven virtual forceps. The user holds in the other hand a laparoscopic handheld controller configured to control the virtual manual laparoscopic stapler tool, with the laparoscopic handheld controller passed through a cannula mounted in a mock patient body made of foam. The laparoscopic handheld controller is calibrated to correspond to the virtual manual laparoscopic stapler tool. The user manipulates the handheld controller to operate the robotically controlled forceps so as to manipulate the virtual patient's intestine and expose a target segment of the intestine. With the target segment of the intestine exposed and accessible, the user manipulates the laparoscopic handheld controller to apply virtual staples to the target segment using the virtual manual laparoscopic stapler tool.

EXAMPLE 2 - COLLISION RESOLUTION FROM THE USER CONSOLE

[0149] When using the virtual reality system, a user may wish to resolve collisions between virtual components of the virtual robotic surgical system, even though the user may not be adjacent to the colliding virtual components (for example, the user may be seated at a distance from the virtual patient table, such as at a virtual user console). In this example, the user wears a head-mounted display providing an immersive view from a virtual endoscope placed inside the abdomen of a virtual patient. The proximal ends of two virtual robotic arms are attached to separate locations on a virtual patient table, on which the virtual patient lies. The distal ends of the virtual robotic arms support respective tool drivers that actuate virtual forceps positioned inside the virtual patient's abdomen. The user manipulates the handheld controllers to operate the two robotically controlled virtual forceps, which manipulate virtual tissue inside the virtual patient. This movement may cause a collision involving at least one of the virtual robotic arms (for example, a virtual robotic arm may be posed so as to collide with itself, the virtual robotic arms may be posed so as to collide with each other, or a virtual robotic arm may be posed so as to collide with the patient or a nearby obstacle, etc.).

[0150] The virtual reality system detects the collision based on status information from the virtual robotic arms and alerts the user to the collision.
The system displays a top view, or another suitable view from a suitable vantage point, of the virtual robotic surgical system, such as in a window view (for example, a picture-in-picture view). The collision location is highlighted in the displayed window view, such as by outlining the affected colliding components in red or another contrasting color. Alternatively, the user can detect the collision himself or herself by monitoring the video feed from a virtual camera placed above the virtual patient table.

[0151] Upon becoming aware of the collision, the user can zoom out or adjust the scale of his or her immersive view of the virtual reality environment. The user can engage an arm repositioning control mode that locks the position and orientation of the virtual forceps inside the patient. Using the handheld controllers in an object-gripping user mode, the user can grab virtual touchpoints on the virtual robotic arms and reposition (re-pose) the virtual robotic arms so as to resolve the collision, while the control mode maintains the position and orientation of the virtual forceps during the arm repositioning. Once the virtual robotic arms are repositioned such that the collision is resolved, the user can return to the previous vantage point, disengage the arm repositioning control mode, and continue using the handheld controllers to operate the virtual forceps on the virtual patient.

EXAMPLE 3 - COORDINATED RELOCATION OF MULTIPLE SURGICAL INSTRUMENTS FROM THE USER CONSOLE

[0152] When using the virtual reality system, a user may find it useful to remain substantially in an endoscopic view and to relocate multiple virtual surgical instruments (for example, end effectors, cameras) as a group, rather than individually, within the virtual patient, thereby saving time and making it easier for the user to maintain contextual awareness of the instruments relative to the virtual patient's anatomy. In this example, the user wears a head-mounted display providing an immersive view from a virtual endoscope placed inside the abdomen of a virtual patient. The proximal ends of two virtual robotic arms are attached to separate locations on a virtual patient table, on which the virtual patient lies. The distal ends of the virtual robotic arms support respective tool drivers that actuate virtual forceps positioned in the virtual patient's pelvic area. The user can manipulate the handheld controllers to operate the virtual forceps.

[0153] The user may wish to move the virtual endoscope and the virtual forceps to another target region of the virtual patient's abdomen, such as the spleen. Rather than moving each surgical instrument individually, the user can engage a coordinated relocation mode. When this mode is engaged, the endoscopic camera view zooms out along the axis of the endoscope to a distance sufficient to allow the user to view the new target region (the spleen). A spherical indicator is displayed at the distal end of the endoscope that encapsulates the distal end of the virtual endoscope and the distal ends of the virtual forceps. The user manipulates at least one handheld controller to withdraw the virtual endoscope and the virtual forceps from the surgical workspace (for example, until the user can see the distal end of the virtual cannula in the virtual endoscopic view), and then grabs the spherical indicator and moves it from the pelvic area to the spleen.
Once the user confirms the new target region by moving the spherical indicator there, the virtual endoscope and the virtual forceps automatically travel to the new target region, and the virtual endoscopic camera view zooms in to show the new target region. Throughout this relatively large-scale movement, the user views the virtual environment in a substantially endoscopic view, allowing him or her to maintain awareness of the virtual patient's anatomy rather than shifting focus between the instruments and the anatomy.

[0154] The foregoing description, for purposes of explanation, has used specific nomenclature to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the present invention. Thus, the foregoing descriptions of specific embodiments of the present invention are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed; obviously, many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, thereby enabling others skilled in the art to best utilize the invention and the various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the following claims and their equivalents define the scope of the invention.
Claims (20)

1. Virtual reality system for simulating a robotic surgical environment, characterized by the fact that the system comprises: a virtual reality processor configured to generate a virtual robotic surgical environment comprising at least one virtual robotic arm and at least one virtual manual laparoscopic tool; a first handheld device communicatively coupled to the virtual reality processor for manipulating the at least one virtual robotic arm in the virtual robotic surgical environment; and a second handheld device comprising a hand portion and a tool feature representative of at least a portion of a manual laparoscopic tool, wherein the second handheld device is communicatively coupled to the virtual reality processor for manipulating the at least one virtual manual laparoscopic tool in the virtual robotic surgical environment.

2. Virtual reality system, according to claim 1, characterized by the fact that the second handheld device is modular.

3. Virtual reality system, according to claim 2, characterized by the fact that the tool feature is removable from the hand portion of the second handheld device.

4. Virtual reality system, according to claim 3, characterized by the fact that the tool feature comprises a tool shaft and a shaft adapter for coupling the tool shaft to the hand portion of the second handheld device.

5. Virtual reality system, according to claim 4, characterized by the fact that the shaft adapter comprises fasteners.

6. Virtual reality system, according to claim 3, characterized by the fact that the hand portion of the second handheld device is substantially similar to the first handheld device.

7. Virtual reality system, according to claim 1, characterized by the fact that the hand portion comprises an interactive feature that activates a function of the virtual manual laparoscopic tool in response to engagement of the interactive feature by a user.

8. Virtual reality system, according to claim 7, characterized by the fact that the interactive feature comprises a trigger.

9. Virtual reality system, according to claim 1, characterized by the fact that the virtual manual laparoscopic tool is a virtual manual laparoscopic stapler.

10. Virtual reality system, according to claim 1, characterized by the fact that it additionally comprises a patient simulator comprising a cannula configured to receive at least a portion of the tool feature of the second handheld device.

11. Computer-implemented method for simulating a robotic surgical environment in a virtual reality system, the method characterized by the fact that it comprises: generating, by one or more processors, a virtual robotic surgical environment comprising at least one virtual robotic arm and a virtual manual laparoscopic tool; communicatively coupling a first handheld device to the one or more processors for manipulating the at least one virtual robotic arm in the virtual robotic surgical environment; communicatively coupling a second handheld device to the one or more processors for manipulating the virtual manual laparoscopic tool in the virtual robotic surgical environment, wherein the second handheld device comprises a tool feature representative of at least a portion of a manual laparoscopic tool; and simulating bedside surgery in the virtual reality system based on user manipulation of the at least one virtual robotic arm with the first handheld device and of the virtual manual laparoscopic tool with the second handheld device.
12. Computer-implemented method, according to claim 11, characterized by the fact that the tool feature is removably coupled to a hand portion of the second handheld device.

13. Computer-implemented method, according to claim 12, characterized by the fact that the tool feature comprises a tool shaft and a shaft adapter for coupling the tool shaft to the hand portion of the second handheld device, and wherein the shaft adapter comprises fasteners.

14. Computer-implemented method, according to claim 12, characterized by the fact that the hand portion of the second handheld device is substantially similar to the first handheld device.

15. Computer-implemented method, according to claim 12, characterized by the fact that the hand portion of the second handheld device comprises a trigger that activates a function of the virtual manual laparoscopic tool in response to engagement of the trigger by a user.

16. Computer-implemented method, according to claim 11, characterized by the fact that the virtual manual laparoscopic tool is one of a stapler, scissors, a dissector, a grasper, a needle holder, a probe, forceps, and a biopsy tool.

17. Computer-implemented method, according to claim 11, characterized by the fact that it additionally comprises: generating a virtual patient; and receiving at least a portion of the tool feature of the second handheld device through a virtual cannula inserted into the virtual patient in a virtual surgical field, in order to simulate bedside surgery.

18. Virtual reality system for simulating robotic surgery, the system characterized by the fact that it comprises: a processor configured to generate a virtual operating room comprising at least one virtual robotic arm mounted on a virtual operating table and at least one virtual manual laparoscopic tool; a first handheld device communicatively coupled to the processor for manipulating the at least one virtual robotic arm in the virtual operating room; and a second handheld device communicatively coupled to the processor for manipulating the at least one virtual manual laparoscopic tool in the virtual operating room, the second handheld device comprising a tool feature representative of at least a portion of a manual laparoscopic tool.

19. Virtual reality system, according to claim 18, characterized by the fact that the processor is additionally configured to generate a virtual patient on the virtual operating table.

20. Virtual reality system, according to claim 19, characterized by the fact that the processor is additionally configured to simulate bedside surgery on the virtual patient using the at least one virtual robotic arm manipulated by the first handheld device and the at least one virtual manual laparoscopic tool manipulated by the second handheld device.